00:00:00.001 Started by upstream project "autotest-per-patch" build number 132721 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.068 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.069 The recommended git tool is: git 00:00:00.069 using credential 00000000-0000-0000-0000-000000000002 00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.095 Fetching changes from the remote Git repository 00:00:00.100 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.138 Using shallow fetch with depth 1 00:00:00.138 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.138 > git --version # timeout=10 00:00:00.192 > git --version # 'git version 2.39.2' 00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.242 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.242 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.644 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.656 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.668 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.668 > git config core.sparsecheckout # timeout=10 00:00:04.679 > git read-tree -mu HEAD # timeout=10 00:00:04.695 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.720 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.721 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.868 [Pipeline] Start of Pipeline 00:00:04.885 [Pipeline] library 00:00:04.887 Loading library shm_lib@master 00:00:04.887 Library shm_lib@master is cached. Copying from home. 00:00:04.903 [Pipeline] node 00:00:04.915 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:04.917 [Pipeline] { 00:00:04.926 [Pipeline] catchError 00:00:04.927 [Pipeline] { 00:00:04.939 [Pipeline] wrap 00:00:04.947 [Pipeline] { 00:00:04.955 [Pipeline] stage 00:00:04.956 [Pipeline] { (Prologue) 00:00:04.975 [Pipeline] echo 00:00:04.977 Node: VM-host-SM9 00:00:04.984 [Pipeline] cleanWs 00:00:04.994 [WS-CLEANUP] Deleting project workspace... 00:00:04.994 [WS-CLEANUP] Deferred wipeout is used... 
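For anyone reproducing this checkout by hand, the git calls traced above amount to roughly the following sequence (a minimal sketch based only on the commands logged; credential handling, the http proxy, and the per-command timeouts are managed by the Jenkins git plugin and are omitted here):

    # shallow fetch of the jbp config repo and checkout of the pinned revision
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507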
00:00:04.999 [WS-CLEANUP] done 00:00:05.221 [Pipeline] setCustomBuildProperty 00:00:05.332 [Pipeline] httpRequest 00:00:05.733 [Pipeline] echo 00:00:05.735 Sorcerer 10.211.164.20 is alive 00:00:05.744 [Pipeline] retry 00:00:05.746 [Pipeline] { 00:00:05.759 [Pipeline] httpRequest 00:00:05.763 HttpMethod: GET 00:00:05.764 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.764 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.772 Response Code: HTTP/1.1 200 OK 00:00:05.772 Success: Status code 200 is in the accepted range: 200,404 00:00:05.773 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.723 [Pipeline] } 00:00:07.735 [Pipeline] // retry 00:00:07.743 [Pipeline] sh 00:00:08.020 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.033 [Pipeline] httpRequest 00:00:08.881 [Pipeline] echo 00:00:08.883 Sorcerer 10.211.164.20 is alive 00:00:08.891 [Pipeline] retry 00:00:08.893 [Pipeline] { 00:00:08.906 [Pipeline] httpRequest 00:00:08.910 HttpMethod: GET 00:00:08.910 URL: http://10.211.164.20/packages/spdk_b82e5bf0317e5c8c6f86fc0673571d5613d82113.tar.gz 00:00:08.911 Sending request to url: http://10.211.164.20/packages/spdk_b82e5bf0317e5c8c6f86fc0673571d5613d82113.tar.gz 00:00:08.924 Response Code: HTTP/1.1 200 OK 00:00:08.925 Success: Status code 200 is in the accepted range: 200,404 00:00:08.925 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_b82e5bf0317e5c8c6f86fc0673571d5613d82113.tar.gz 00:00:47.539 [Pipeline] } 00:00:47.558 [Pipeline] // retry 00:00:47.565 [Pipeline] sh 00:00:47.872 + tar --no-same-owner -xf spdk_b82e5bf0317e5c8c6f86fc0673571d5613d82113.tar.gz 00:00:50.444 [Pipeline] sh 00:00:50.721 + git -C spdk log --oneline -n5 00:00:50.721 b82e5bf03 bdev/compress: Simplify split logic for unmap operation 00:00:50.721 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:00:50.721 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:00:50.721 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:00:50.721 e2dfdf06c accel/mlx5: Register post_poller handler 00:00:50.740 [Pipeline] writeFile 00:00:50.755 [Pipeline] sh 00:00:51.034 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:51.045 [Pipeline] sh 00:00:51.326 + cat autorun-spdk.conf 00:00:51.326 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.326 SPDK_TEST_NVMF=1 00:00:51.326 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.326 SPDK_TEST_URING=1 00:00:51.326 SPDK_TEST_USDT=1 00:00:51.326 SPDK_RUN_UBSAN=1 00:00:51.326 NET_TYPE=virt 00:00:51.326 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:51.332 RUN_NIGHTLY=0 00:00:51.334 [Pipeline] } 00:00:51.348 [Pipeline] // stage 00:00:51.363 [Pipeline] stage 00:00:51.366 [Pipeline] { (Run VM) 00:00:51.378 [Pipeline] sh 00:00:51.659 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:51.659 + echo 'Start stage prepare_nvme.sh' 00:00:51.659 Start stage prepare_nvme.sh 00:00:51.659 + [[ -n 5 ]] 00:00:51.659 + disk_prefix=ex5 00:00:51.659 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:00:51.659 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:00:51.659 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:00:51.659 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.659 ++ SPDK_TEST_NVMF=1 00:00:51.659 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.659 ++ SPDK_TEST_URING=1 00:00:51.659 ++ SPDK_TEST_USDT=1 00:00:51.659 ++ SPDK_RUN_UBSAN=1 00:00:51.659 ++ NET_TYPE=virt 00:00:51.659 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:51.659 ++ RUN_NIGHTLY=0 00:00:51.659 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:51.659 + nvme_files=() 00:00:51.659 + declare -A nvme_files 00:00:51.659 + backend_dir=/var/lib/libvirt/images/backends 00:00:51.659 + nvme_files['nvme.img']=5G 00:00:51.659 + nvme_files['nvme-cmb.img']=5G 00:00:51.659 + nvme_files['nvme-multi0.img']=4G 00:00:51.659 + nvme_files['nvme-multi1.img']=4G 00:00:51.659 + nvme_files['nvme-multi2.img']=4G 00:00:51.659 + nvme_files['nvme-openstack.img']=8G 00:00:51.659 + nvme_files['nvme-zns.img']=5G 00:00:51.659 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:51.659 + (( SPDK_TEST_FTL == 1 )) 00:00:51.659 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:51.659 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:51.659 + for nvme in "${!nvme_files[@]}" 00:00:51.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:51.659 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.659 + for nvme in "${!nvme_files[@]}" 00:00:51.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:51.659 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.659 + for nvme in "${!nvme_files[@]}" 00:00:51.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:51.659 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:51.659 + for nvme in "${!nvme_files[@]}" 00:00:51.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:51.659 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.659 + for nvme in "${!nvme_files[@]}" 00:00:51.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:51.659 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.659 + for nvme in "${!nvme_files[@]}" 00:00:51.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:51.659 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.659 + for nvme in "${!nvme_files[@]}" 00:00:51.659 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:51.918 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.918 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:51.918 + echo 'End stage prepare_nvme.sh' 00:00:51.918 End stage prepare_nvme.sh 00:00:51.930 [Pipeline] sh 00:00:52.213 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:52.213 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b 
/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:00:52.213 00:00:52.213 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:00:52.213 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:00:52.213 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:52.213 HELP=0 00:00:52.213 DRY_RUN=0 00:00:52.213 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:52.213 NVME_DISKS_TYPE=nvme,nvme, 00:00:52.213 NVME_AUTO_CREATE=0 00:00:52.213 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:52.213 NVME_CMB=,, 00:00:52.213 NVME_PMR=,, 00:00:52.213 NVME_ZNS=,, 00:00:52.213 NVME_MS=,, 00:00:52.213 NVME_FDP=,, 00:00:52.213 SPDK_VAGRANT_DISTRO=fedora39 00:00:52.213 SPDK_VAGRANT_VMCPU=10 00:00:52.213 SPDK_VAGRANT_VMRAM=12288 00:00:52.213 SPDK_VAGRANT_PROVIDER=libvirt 00:00:52.213 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:52.213 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:52.213 SPDK_OPENSTACK_NETWORK=0 00:00:52.213 VAGRANT_PACKAGE_BOX=0 00:00:52.213 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:52.213 FORCE_DISTRO=true 00:00:52.213 VAGRANT_BOX_VERSION= 00:00:52.213 EXTRA_VAGRANTFILES= 00:00:52.213 NIC_MODEL=e1000 00:00:52.213 00:00:52.213 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt' 00:00:52.213 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:54.750 Bringing machine 'default' up with 'libvirt' provider... 00:00:55.318 ==> default: Creating image (snapshot of base box volume). 00:00:55.318 ==> default: Creating domain with the following settings... 
00:00:55.318 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733486921_8f4d53fb269bfad6a10c 00:00:55.318 ==> default: -- Domain type: kvm 00:00:55.318 ==> default: -- Cpus: 10 00:00:55.318 ==> default: -- Feature: acpi 00:00:55.318 ==> default: -- Feature: apic 00:00:55.318 ==> default: -- Feature: pae 00:00:55.577 ==> default: -- Memory: 12288M 00:00:55.577 ==> default: -- Memory Backing: hugepages: 00:00:55.577 ==> default: -- Management MAC: 00:00:55.577 ==> default: -- Loader: 00:00:55.577 ==> default: -- Nvram: 00:00:55.577 ==> default: -- Base box: spdk/fedora39 00:00:55.577 ==> default: -- Storage pool: default 00:00:55.577 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733486921_8f4d53fb269bfad6a10c.img (20G) 00:00:55.577 ==> default: -- Volume Cache: default 00:00:55.577 ==> default: -- Kernel: 00:00:55.577 ==> default: -- Initrd: 00:00:55.577 ==> default: -- Graphics Type: vnc 00:00:55.577 ==> default: -- Graphics Port: -1 00:00:55.577 ==> default: -- Graphics IP: 127.0.0.1 00:00:55.577 ==> default: -- Graphics Password: Not defined 00:00:55.577 ==> default: -- Video Type: cirrus 00:00:55.577 ==> default: -- Video VRAM: 9216 00:00:55.577 ==> default: -- Sound Type: 00:00:55.577 ==> default: -- Keymap: en-us 00:00:55.577 ==> default: -- TPM Path: 00:00:55.577 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:55.577 ==> default: -- Command line args: 00:00:55.577 ==> default: -> value=-device, 00:00:55.577 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:55.577 ==> default: -> value=-drive, 00:00:55.577 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:55.577 ==> default: -> value=-device, 00:00:55.577 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:55.577 ==> default: -> value=-device, 00:00:55.577 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:55.577 ==> default: -> value=-drive, 00:00:55.577 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:55.577 ==> default: -> value=-device, 00:00:55.577 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:55.577 ==> default: -> value=-drive, 00:00:55.577 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:55.577 ==> default: -> value=-device, 00:00:55.577 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:55.577 ==> default: -> value=-drive, 00:00:55.577 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:55.577 ==> default: -> value=-device, 00:00:55.577 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:55.577 ==> default: Creating shared folders metadata... 00:00:55.577 ==> default: Starting domain. 00:00:56.957 ==> default: Waiting for domain to get an IP address... 00:01:11.835 ==> default: Waiting for SSH to become available... 00:01:13.212 ==> default: Configuring and enabling network interfaces... 
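Taken together, the -device/-drive pairs listed in the domain settings above describe two emulated NVMe controllers: nvme-0 (serial 12340) backed by ex5-nvme.img with one namespace, and nvme-1 (serial 12341) with three namespaces backed by ex5-nvme-multi0/1/2.img. Assembled into a single invocation they would look roughly like the sketch below (the binary path comes from SPDK_QEMU_EMULATOR above; the machine, memory, CPU, and network options that vagrant-libvirt adds are omitted, so this is illustrative rather than the exact command run):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1 \
        -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2 \
        -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096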
00:01:17.404 default: SSH address: 192.168.121.178:22 00:01:17.404 default: SSH username: vagrant 00:01:17.404 default: SSH auth method: private key 00:01:19.308 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:27.414 ==> default: Mounting SSHFS shared folder... 00:01:28.790 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:28.790 ==> default: Checking Mount.. 00:01:29.725 ==> default: Folder Successfully Mounted! 00:01:29.725 ==> default: Running provisioner: file... 00:01:30.688 default: ~/.gitconfig => .gitconfig 00:01:30.947 00:01:30.947 SUCCESS! 00:01:30.947 00:01:30.947 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:30.947 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:30.947 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:01:30.947 00:01:30.956 [Pipeline] } 00:01:30.972 [Pipeline] // stage 00:01:30.982 [Pipeline] dir 00:01:30.983 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt 00:01:30.985 [Pipeline] { 00:01:30.998 [Pipeline] catchError 00:01:30.999 [Pipeline] { 00:01:31.013 [Pipeline] sh 00:01:31.292 + vagrant ssh-config --host vagrant 00:01:31.293 + sed -ne /^Host/,$p 00:01:31.293 + tee ssh_conf 00:01:34.581 Host vagrant 00:01:34.581 HostName 192.168.121.178 00:01:34.581 User vagrant 00:01:34.581 Port 22 00:01:34.581 UserKnownHostsFile /dev/null 00:01:34.581 StrictHostKeyChecking no 00:01:34.581 PasswordAuthentication no 00:01:34.581 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:34.581 IdentitiesOnly yes 00:01:34.581 LogLevel FATAL 00:01:34.581 ForwardAgent yes 00:01:34.581 ForwardX11 yes 00:01:34.581 00:01:34.596 [Pipeline] withEnv 00:01:34.598 [Pipeline] { 00:01:34.612 [Pipeline] sh 00:01:34.893 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:34.893 source /etc/os-release 00:01:34.893 [[ -e /image.version ]] && img=$(< /image.version) 00:01:34.893 # Minimal, systemd-like check. 00:01:34.893 if [[ -e /.dockerenv ]]; then 00:01:34.893 # Clear garbage from the node's name: 00:01:34.893 # agt-er_autotest_547-896 -> autotest_547-896 00:01:34.893 # $HOSTNAME is the actual container id 00:01:34.893 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:34.893 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:34.893 # We can assume this is a mount from a host where container is running, 00:01:34.893 # so fetch its hostname to easily identify the target swarm worker. 
00:01:34.893 container="$(< /etc/hostname) ($agent)" 00:01:34.893 else 00:01:34.893 # Fallback 00:01:34.893 container=$agent 00:01:34.893 fi 00:01:34.893 fi 00:01:34.893 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:34.893 00:01:35.166 [Pipeline] } 00:01:35.183 [Pipeline] // withEnv 00:01:35.192 [Pipeline] setCustomBuildProperty 00:01:35.208 [Pipeline] stage 00:01:35.210 [Pipeline] { (Tests) 00:01:35.228 [Pipeline] sh 00:01:35.509 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:35.783 [Pipeline] sh 00:01:36.064 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:36.339 [Pipeline] timeout 00:01:36.340 Timeout set to expire in 1 hr 0 min 00:01:36.342 [Pipeline] { 00:01:36.356 [Pipeline] sh 00:01:36.636 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:37.204 HEAD is now at b82e5bf03 bdev/compress: Simplify split logic for unmap operation 00:01:37.217 [Pipeline] sh 00:01:37.501 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:37.774 [Pipeline] sh 00:01:38.056 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:38.333 [Pipeline] sh 00:01:38.618 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:38.879 ++ readlink -f spdk_repo 00:01:38.879 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:38.879 + [[ -n /home/vagrant/spdk_repo ]] 00:01:38.879 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:38.879 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:38.879 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:38.879 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:38.879 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:38.879 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:38.879 + cd /home/vagrant/spdk_repo 00:01:38.879 + source /etc/os-release 00:01:38.879 ++ NAME='Fedora Linux' 00:01:38.879 ++ VERSION='39 (Cloud Edition)' 00:01:38.879 ++ ID=fedora 00:01:38.879 ++ VERSION_ID=39 00:01:38.879 ++ VERSION_CODENAME= 00:01:38.879 ++ PLATFORM_ID=platform:f39 00:01:38.879 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:38.879 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:38.879 ++ LOGO=fedora-logo-icon 00:01:38.879 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:38.879 ++ HOME_URL=https://fedoraproject.org/ 00:01:38.879 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:38.879 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:38.879 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:38.879 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:38.879 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:38.879 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:38.879 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:38.879 ++ SUPPORT_END=2024-11-12 00:01:38.879 ++ VARIANT='Cloud Edition' 00:01:38.879 ++ VARIANT_ID=cloud 00:01:38.879 + uname -a 00:01:38.879 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:38.879 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:39.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:39.447 Hugepages 00:01:39.447 node hugesize free / total 00:01:39.447 node0 1048576kB 0 / 0 00:01:39.447 node0 2048kB 0 / 0 00:01:39.447 00:01:39.447 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:39.447 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:39.447 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:39.447 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:39.447 + rm -f /tmp/spdk-ld-path 00:01:39.447 + source autorun-spdk.conf 00:01:39.447 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.447 ++ SPDK_TEST_NVMF=1 00:01:39.447 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.447 ++ SPDK_TEST_URING=1 00:01:39.447 ++ SPDK_TEST_USDT=1 00:01:39.447 ++ SPDK_RUN_UBSAN=1 00:01:39.447 ++ NET_TYPE=virt 00:01:39.447 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.447 ++ RUN_NIGHTLY=0 00:01:39.447 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:39.447 + [[ -n '' ]] 00:01:39.447 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:39.447 + for M in /var/spdk/build-*-manifest.txt 00:01:39.447 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:39.447 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:39.447 + for M in /var/spdk/build-*-manifest.txt 00:01:39.447 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:39.447 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:39.447 + for M in /var/spdk/build-*-manifest.txt 00:01:39.447 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:39.447 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:39.447 ++ uname 00:01:39.447 + [[ Linux == \L\i\n\u\x ]] 00:01:39.447 + sudo dmesg -T 00:01:39.447 + sudo dmesg --clear 00:01:39.447 + dmesg_pid=5253 00:01:39.447 + [[ Fedora Linux == FreeBSD ]] 00:01:39.447 + sudo dmesg -Tw 00:01:39.448 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.448 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.448 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:39.448 + [[ -x /usr/src/fio-static/fio ]] 00:01:39.448 + export FIO_BIN=/usr/src/fio-static/fio 00:01:39.448 + FIO_BIN=/usr/src/fio-static/fio 00:01:39.448 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:39.448 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:39.448 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:39.448 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.448 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.448 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:39.448 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.448 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.448 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:39.707 12:09:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:39.707 12:09:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:39.707 12:09:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.707 12:09:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:39.707 12:09:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.707 12:09:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:39.707 12:09:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:39.707 12:09:26 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:39.707 12:09:26 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:39.707 12:09:26 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.707 12:09:26 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:39.707 12:09:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:39.707 12:09:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:39.707 12:09:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:39.707 12:09:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:39.707 12:09:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:39.707 12:09:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:39.707 12:09:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:39.707 12:09:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:39.707 12:09:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.707 12:09:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.707 12:09:26 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.707 12:09:26 -- paths/export.sh@5 -- $ export PATH 00:01:39.707 12:09:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.708 12:09:26 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:39.708 12:09:26 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:39.708 12:09:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733486966.XXXXXX 00:01:39.708 12:09:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733486966.Kcfszw 00:01:39.708 12:09:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:39.708 12:09:26 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:39.708 12:09:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:39.708 12:09:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:39.708 12:09:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:39.708 12:09:26 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:39.708 12:09:26 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:39.708 12:09:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.708 12:09:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:39.708 12:09:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:39.708 12:09:26 -- pm/common@17 -- $ local monitor 00:01:39.708 12:09:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.708 12:09:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.708 12:09:26 -- pm/common@21 -- $ date +%s 00:01:39.708 12:09:26 -- pm/common@25 -- $ sleep 1 00:01:39.708 12:09:26 -- pm/common@21 -- $ date +%s 00:01:39.708 12:09:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733486966 00:01:39.708 12:09:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733486966 00:01:39.708 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733486966_collect-cpu-load.pm.log 00:01:39.708 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733486966_collect-vmstat.pm.log 00:01:40.646 12:09:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:40.646 12:09:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:40.646 12:09:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:40.646 12:09:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:40.646 12:09:27 -- spdk/autobuild.sh@16 -- $ date -u 00:01:40.646 Fri Dec 6 12:09:27 PM UTC 2024 00:01:40.646 12:09:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:40.646 v25.01-pre-304-gb82e5bf03 00:01:40.646 12:09:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:40.646 12:09:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:40.646 12:09:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:40.646 12:09:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:40.646 12:09:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:40.646 12:09:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.646 ************************************ 00:01:40.646 START TEST ubsan 00:01:40.646 ************************************ 00:01:40.646 using ubsan 00:01:40.646 12:09:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:40.646 00:01:40.646 real 0m0.000s 00:01:40.646 user 0m0.000s 00:01:40.646 sys 0m0.000s 00:01:40.646 12:09:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:40.646 12:09:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:40.646 ************************************ 00:01:40.646 END TEST ubsan 00:01:40.646 ************************************ 00:01:40.905 12:09:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:40.906 12:09:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:40.906 12:09:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:40.906 12:09:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:40.906 12:09:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:40.906 12:09:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:40.906 12:09:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:40.906 12:09:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:40.906 12:09:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:40.906 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:40.906 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:41.473 Using 'verbs' RDMA provider 00:01:57.292 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:09.510 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:09.510 Creating mk/config.mk...done. 00:02:09.510 Creating mk/cc.flags.mk...done. 00:02:09.510 Type 'make' to build. 
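The configure flags logged above, together with the make step that follows, are what one would run to repeat this build by hand inside the VM, outside the autobuild.sh wrapper (a sketch reconstructed from the log; the wrapper additionally starts the cpu-load/vmstat monitors and redirects output to the shared output directory):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j10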
00:02:09.510 12:09:55 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:09.510 12:09:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:09.510 12:09:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:09.510 12:09:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.510 ************************************ 00:02:09.510 START TEST make 00:02:09.510 ************************************ 00:02:09.510 12:09:55 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:09.768 make[1]: Nothing to be done for 'all'. 00:02:21.971 The Meson build system 00:02:21.971 Version: 1.5.0 00:02:21.971 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:21.971 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:21.971 Build type: native build 00:02:21.971 Program cat found: YES (/usr/bin/cat) 00:02:21.971 Project name: DPDK 00:02:21.971 Project version: 24.03.0 00:02:21.971 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:21.971 C linker for the host machine: cc ld.bfd 2.40-14 00:02:21.971 Host machine cpu family: x86_64 00:02:21.971 Host machine cpu: x86_64 00:02:21.971 Message: ## Building in Developer Mode ## 00:02:21.971 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:21.971 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:21.971 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:21.971 Program python3 found: YES (/usr/bin/python3) 00:02:21.971 Program cat found: YES (/usr/bin/cat) 00:02:21.971 Compiler for C supports arguments -march=native: YES 00:02:21.971 Checking for size of "void *" : 8 00:02:21.971 Checking for size of "void *" : 8 (cached) 00:02:21.971 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:21.971 Library m found: YES 00:02:21.971 Library numa found: YES 00:02:21.971 Has header "numaif.h" : YES 00:02:21.971 Library fdt found: NO 00:02:21.971 Library execinfo found: NO 00:02:21.971 Has header "execinfo.h" : YES 00:02:21.971 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.971 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:21.971 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:21.971 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:21.972 Run-time dependency openssl found: YES 3.1.1 00:02:21.972 Run-time dependency libpcap found: YES 1.10.4 00:02:21.972 Has header "pcap.h" with dependency libpcap: YES 00:02:21.972 Compiler for C supports arguments -Wcast-qual: YES 00:02:21.972 Compiler for C supports arguments -Wdeprecated: YES 00:02:21.972 Compiler for C supports arguments -Wformat: YES 00:02:21.972 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:21.972 Compiler for C supports arguments -Wformat-security: NO 00:02:21.972 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:21.972 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:21.972 Compiler for C supports arguments -Wnested-externs: YES 00:02:21.972 Compiler for C supports arguments -Wold-style-definition: YES 00:02:21.972 Compiler for C supports arguments -Wpointer-arith: YES 00:02:21.972 Compiler for C supports arguments -Wsign-compare: YES 00:02:21.972 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:21.972 Compiler for C supports arguments -Wundef: YES 00:02:21.972 Compiler for C supports arguments -Wwrite-strings: YES 00:02:21.972 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:21.972 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:21.972 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:21.972 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:21.972 Program objdump found: YES (/usr/bin/objdump) 00:02:21.972 Compiler for C supports arguments -mavx512f: YES 00:02:21.972 Checking if "AVX512 checking" compiles: YES 00:02:21.972 Fetching value of define "__SSE4_2__" : 1 00:02:21.972 Fetching value of define "__AES__" : 1 00:02:21.972 Fetching value of define "__AVX__" : 1 00:02:21.972 Fetching value of define "__AVX2__" : 1 00:02:21.972 Fetching value of define "__AVX512BW__" : (undefined) 00:02:21.972 Fetching value of define "__AVX512CD__" : (undefined) 00:02:21.972 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:21.972 Fetching value of define "__AVX512F__" : (undefined) 00:02:21.972 Fetching value of define "__AVX512VL__" : (undefined) 00:02:21.972 Fetching value of define "__PCLMUL__" : 1 00:02:21.972 Fetching value of define "__RDRND__" : 1 00:02:21.972 Fetching value of define "__RDSEED__" : 1 00:02:21.972 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:21.972 Fetching value of define "__znver1__" : (undefined) 00:02:21.972 Fetching value of define "__znver2__" : (undefined) 00:02:21.972 Fetching value of define "__znver3__" : (undefined) 00:02:21.972 Fetching value of define "__znver4__" : (undefined) 00:02:21.972 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:21.972 Message: lib/log: Defining dependency "log" 00:02:21.972 Message: lib/kvargs: Defining dependency "kvargs" 00:02:21.972 Message: lib/telemetry: Defining dependency "telemetry" 00:02:21.972 Checking for function "getentropy" : NO 00:02:21.972 Message: lib/eal: Defining dependency "eal" 00:02:21.972 Message: lib/ring: Defining dependency "ring" 00:02:21.972 Message: lib/rcu: Defining dependency "rcu" 00:02:21.972 Message: lib/mempool: Defining dependency "mempool" 00:02:21.972 Message: lib/mbuf: Defining dependency "mbuf" 00:02:21.972 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:21.972 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:21.972 Compiler for C supports arguments -mpclmul: YES 00:02:21.972 Compiler for C supports arguments -maes: YES 00:02:21.972 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.972 Compiler for C supports arguments -mavx512bw: YES 00:02:21.972 Compiler for C supports arguments -mavx512dq: YES 00:02:21.972 Compiler for C supports arguments -mavx512vl: YES 00:02:21.972 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:21.972 Compiler for C supports arguments -mavx2: YES 00:02:21.972 Compiler for C supports arguments -mavx: YES 00:02:21.972 Message: lib/net: Defining dependency "net" 00:02:21.972 Message: lib/meter: Defining dependency "meter" 00:02:21.972 Message: lib/ethdev: Defining dependency "ethdev" 00:02:21.972 Message: lib/pci: Defining dependency "pci" 00:02:21.972 Message: lib/cmdline: Defining dependency "cmdline" 00:02:21.972 Message: lib/hash: Defining dependency "hash" 00:02:21.972 Message: lib/timer: Defining dependency "timer" 00:02:21.972 Message: lib/compressdev: Defining dependency "compressdev" 00:02:21.972 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:21.972 Message: lib/dmadev: Defining dependency "dmadev" 00:02:21.972 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:21.972 Message: lib/power: Defining 
dependency "power" 00:02:21.972 Message: lib/reorder: Defining dependency "reorder" 00:02:21.972 Message: lib/security: Defining dependency "security" 00:02:21.972 Has header "linux/userfaultfd.h" : YES 00:02:21.972 Has header "linux/vduse.h" : YES 00:02:21.972 Message: lib/vhost: Defining dependency "vhost" 00:02:21.972 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:21.972 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:21.972 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:21.972 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:21.972 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:21.972 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:21.972 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:21.972 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:21.972 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:21.972 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:21.972 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:21.972 Configuring doxy-api-html.conf using configuration 00:02:21.972 Configuring doxy-api-man.conf using configuration 00:02:21.972 Program mandb found: YES (/usr/bin/mandb) 00:02:21.972 Program sphinx-build found: NO 00:02:21.972 Configuring rte_build_config.h using configuration 00:02:21.972 Message: 00:02:21.972 ================= 00:02:21.972 Applications Enabled 00:02:21.972 ================= 00:02:21.972 00:02:21.972 apps: 00:02:21.972 00:02:21.972 00:02:21.972 Message: 00:02:21.972 ================= 00:02:21.972 Libraries Enabled 00:02:21.972 ================= 00:02:21.972 00:02:21.972 libs: 00:02:21.972 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:21.972 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:21.972 cryptodev, dmadev, power, reorder, security, vhost, 00:02:21.972 00:02:21.972 Message: 00:02:21.972 =============== 00:02:21.972 Drivers Enabled 00:02:21.972 =============== 00:02:21.972 00:02:21.972 common: 00:02:21.972 00:02:21.972 bus: 00:02:21.972 pci, vdev, 00:02:21.972 mempool: 00:02:21.972 ring, 00:02:21.972 dma: 00:02:21.972 00:02:21.972 net: 00:02:21.972 00:02:21.972 crypto: 00:02:21.972 00:02:21.972 compress: 00:02:21.972 00:02:21.972 vdpa: 00:02:21.972 00:02:21.972 00:02:21.972 Message: 00:02:21.972 ================= 00:02:21.972 Content Skipped 00:02:21.972 ================= 00:02:21.972 00:02:21.972 apps: 00:02:21.972 dumpcap: explicitly disabled via build config 00:02:21.972 graph: explicitly disabled via build config 00:02:21.972 pdump: explicitly disabled via build config 00:02:21.972 proc-info: explicitly disabled via build config 00:02:21.972 test-acl: explicitly disabled via build config 00:02:21.972 test-bbdev: explicitly disabled via build config 00:02:21.972 test-cmdline: explicitly disabled via build config 00:02:21.972 test-compress-perf: explicitly disabled via build config 00:02:21.972 test-crypto-perf: explicitly disabled via build config 00:02:21.972 test-dma-perf: explicitly disabled via build config 00:02:21.972 test-eventdev: explicitly disabled via build config 00:02:21.972 test-fib: explicitly disabled via build config 00:02:21.972 test-flow-perf: explicitly disabled via build config 00:02:21.972 test-gpudev: explicitly disabled via build config 00:02:21.972 test-mldev: explicitly disabled via build config 00:02:21.972 test-pipeline: 
explicitly disabled via build config 00:02:21.972 test-pmd: explicitly disabled via build config 00:02:21.972 test-regex: explicitly disabled via build config 00:02:21.972 test-sad: explicitly disabled via build config 00:02:21.972 test-security-perf: explicitly disabled via build config 00:02:21.972 00:02:21.972 libs: 00:02:21.972 argparse: explicitly disabled via build config 00:02:21.972 metrics: explicitly disabled via build config 00:02:21.972 acl: explicitly disabled via build config 00:02:21.972 bbdev: explicitly disabled via build config 00:02:21.972 bitratestats: explicitly disabled via build config 00:02:21.972 bpf: explicitly disabled via build config 00:02:21.973 cfgfile: explicitly disabled via build config 00:02:21.973 distributor: explicitly disabled via build config 00:02:21.973 efd: explicitly disabled via build config 00:02:21.973 eventdev: explicitly disabled via build config 00:02:21.973 dispatcher: explicitly disabled via build config 00:02:21.973 gpudev: explicitly disabled via build config 00:02:21.973 gro: explicitly disabled via build config 00:02:21.973 gso: explicitly disabled via build config 00:02:21.973 ip_frag: explicitly disabled via build config 00:02:21.973 jobstats: explicitly disabled via build config 00:02:21.973 latencystats: explicitly disabled via build config 00:02:21.973 lpm: explicitly disabled via build config 00:02:21.973 member: explicitly disabled via build config 00:02:21.973 pcapng: explicitly disabled via build config 00:02:21.973 rawdev: explicitly disabled via build config 00:02:21.973 regexdev: explicitly disabled via build config 00:02:21.973 mldev: explicitly disabled via build config 00:02:21.973 rib: explicitly disabled via build config 00:02:21.973 sched: explicitly disabled via build config 00:02:21.973 stack: explicitly disabled via build config 00:02:21.973 ipsec: explicitly disabled via build config 00:02:21.973 pdcp: explicitly disabled via build config 00:02:21.973 fib: explicitly disabled via build config 00:02:21.973 port: explicitly disabled via build config 00:02:21.973 pdump: explicitly disabled via build config 00:02:21.973 table: explicitly disabled via build config 00:02:21.973 pipeline: explicitly disabled via build config 00:02:21.973 graph: explicitly disabled via build config 00:02:21.973 node: explicitly disabled via build config 00:02:21.973 00:02:21.973 drivers: 00:02:21.973 common/cpt: not in enabled drivers build config 00:02:21.973 common/dpaax: not in enabled drivers build config 00:02:21.973 common/iavf: not in enabled drivers build config 00:02:21.973 common/idpf: not in enabled drivers build config 00:02:21.973 common/ionic: not in enabled drivers build config 00:02:21.973 common/mvep: not in enabled drivers build config 00:02:21.973 common/octeontx: not in enabled drivers build config 00:02:21.973 bus/auxiliary: not in enabled drivers build config 00:02:21.973 bus/cdx: not in enabled drivers build config 00:02:21.973 bus/dpaa: not in enabled drivers build config 00:02:21.973 bus/fslmc: not in enabled drivers build config 00:02:21.973 bus/ifpga: not in enabled drivers build config 00:02:21.973 bus/platform: not in enabled drivers build config 00:02:21.973 bus/uacce: not in enabled drivers build config 00:02:21.973 bus/vmbus: not in enabled drivers build config 00:02:21.973 common/cnxk: not in enabled drivers build config 00:02:21.973 common/mlx5: not in enabled drivers build config 00:02:21.973 common/nfp: not in enabled drivers build config 00:02:21.973 common/nitrox: not in enabled drivers build config 
00:02:21.973 common/qat: not in enabled drivers build config 00:02:21.973 common/sfc_efx: not in enabled drivers build config 00:02:21.973 mempool/bucket: not in enabled drivers build config 00:02:21.973 mempool/cnxk: not in enabled drivers build config 00:02:21.973 mempool/dpaa: not in enabled drivers build config 00:02:21.973 mempool/dpaa2: not in enabled drivers build config 00:02:21.973 mempool/octeontx: not in enabled drivers build config 00:02:21.973 mempool/stack: not in enabled drivers build config 00:02:21.973 dma/cnxk: not in enabled drivers build config 00:02:21.973 dma/dpaa: not in enabled drivers build config 00:02:21.973 dma/dpaa2: not in enabled drivers build config 00:02:21.973 dma/hisilicon: not in enabled drivers build config 00:02:21.973 dma/idxd: not in enabled drivers build config 00:02:21.973 dma/ioat: not in enabled drivers build config 00:02:21.973 dma/skeleton: not in enabled drivers build config 00:02:21.973 net/af_packet: not in enabled drivers build config 00:02:21.973 net/af_xdp: not in enabled drivers build config 00:02:21.973 net/ark: not in enabled drivers build config 00:02:21.973 net/atlantic: not in enabled drivers build config 00:02:21.973 net/avp: not in enabled drivers build config 00:02:21.973 net/axgbe: not in enabled drivers build config 00:02:21.973 net/bnx2x: not in enabled drivers build config 00:02:21.973 net/bnxt: not in enabled drivers build config 00:02:21.973 net/bonding: not in enabled drivers build config 00:02:21.973 net/cnxk: not in enabled drivers build config 00:02:21.973 net/cpfl: not in enabled drivers build config 00:02:21.973 net/cxgbe: not in enabled drivers build config 00:02:21.973 net/dpaa: not in enabled drivers build config 00:02:21.973 net/dpaa2: not in enabled drivers build config 00:02:21.973 net/e1000: not in enabled drivers build config 00:02:21.973 net/ena: not in enabled drivers build config 00:02:21.973 net/enetc: not in enabled drivers build config 00:02:21.973 net/enetfec: not in enabled drivers build config 00:02:21.973 net/enic: not in enabled drivers build config 00:02:21.973 net/failsafe: not in enabled drivers build config 00:02:21.973 net/fm10k: not in enabled drivers build config 00:02:21.973 net/gve: not in enabled drivers build config 00:02:21.973 net/hinic: not in enabled drivers build config 00:02:21.973 net/hns3: not in enabled drivers build config 00:02:21.973 net/i40e: not in enabled drivers build config 00:02:21.973 net/iavf: not in enabled drivers build config 00:02:21.973 net/ice: not in enabled drivers build config 00:02:21.973 net/idpf: not in enabled drivers build config 00:02:21.973 net/igc: not in enabled drivers build config 00:02:21.973 net/ionic: not in enabled drivers build config 00:02:21.973 net/ipn3ke: not in enabled drivers build config 00:02:21.973 net/ixgbe: not in enabled drivers build config 00:02:21.973 net/mana: not in enabled drivers build config 00:02:21.973 net/memif: not in enabled drivers build config 00:02:21.973 net/mlx4: not in enabled drivers build config 00:02:21.973 net/mlx5: not in enabled drivers build config 00:02:21.973 net/mvneta: not in enabled drivers build config 00:02:21.973 net/mvpp2: not in enabled drivers build config 00:02:21.973 net/netvsc: not in enabled drivers build config 00:02:21.973 net/nfb: not in enabled drivers build config 00:02:21.973 net/nfp: not in enabled drivers build config 00:02:21.973 net/ngbe: not in enabled drivers build config 00:02:21.973 net/null: not in enabled drivers build config 00:02:21.973 net/octeontx: not in enabled drivers 
build config 00:02:21.973 net/octeon_ep: not in enabled drivers build config 00:02:21.973 net/pcap: not in enabled drivers build config 00:02:21.973 net/pfe: not in enabled drivers build config 00:02:21.973 net/qede: not in enabled drivers build config 00:02:21.973 net/ring: not in enabled drivers build config 00:02:21.973 net/sfc: not in enabled drivers build config 00:02:21.973 net/softnic: not in enabled drivers build config 00:02:21.973 net/tap: not in enabled drivers build config 00:02:21.973 net/thunderx: not in enabled drivers build config 00:02:21.973 net/txgbe: not in enabled drivers build config 00:02:21.973 net/vdev_netvsc: not in enabled drivers build config 00:02:21.973 net/vhost: not in enabled drivers build config 00:02:21.973 net/virtio: not in enabled drivers build config 00:02:21.973 net/vmxnet3: not in enabled drivers build config 00:02:21.973 raw/*: missing internal dependency, "rawdev" 00:02:21.973 crypto/armv8: not in enabled drivers build config 00:02:21.973 crypto/bcmfs: not in enabled drivers build config 00:02:21.973 crypto/caam_jr: not in enabled drivers build config 00:02:21.973 crypto/ccp: not in enabled drivers build config 00:02:21.973 crypto/cnxk: not in enabled drivers build config 00:02:21.973 crypto/dpaa_sec: not in enabled drivers build config 00:02:21.973 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.973 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.973 crypto/mlx5: not in enabled drivers build config 00:02:21.973 crypto/mvsam: not in enabled drivers build config 00:02:21.973 crypto/nitrox: not in enabled drivers build config 00:02:21.973 crypto/null: not in enabled drivers build config 00:02:21.973 crypto/octeontx: not in enabled drivers build config 00:02:21.973 crypto/openssl: not in enabled drivers build config 00:02:21.973 crypto/scheduler: not in enabled drivers build config 00:02:21.973 crypto/uadk: not in enabled drivers build config 00:02:21.973 crypto/virtio: not in enabled drivers build config 00:02:21.973 compress/isal: not in enabled drivers build config 00:02:21.973 compress/mlx5: not in enabled drivers build config 00:02:21.973 compress/nitrox: not in enabled drivers build config 00:02:21.973 compress/octeontx: not in enabled drivers build config 00:02:21.973 compress/zlib: not in enabled drivers build config 00:02:21.973 regex/*: missing internal dependency, "regexdev" 00:02:21.973 ml/*: missing internal dependency, "mldev" 00:02:21.973 vdpa/ifc: not in enabled drivers build config 00:02:21.973 vdpa/mlx5: not in enabled drivers build config 00:02:21.973 vdpa/nfp: not in enabled drivers build config 00:02:21.973 vdpa/sfc: not in enabled drivers build config 00:02:21.973 event/*: missing internal dependency, "eventdev" 00:02:21.973 baseband/*: missing internal dependency, "bbdev" 00:02:21.973 gpu/*: missing internal dependency, "gpudev" 00:02:21.973 00:02:21.973 00:02:21.973 Build targets in project: 85 00:02:21.973 00:02:21.973 DPDK 24.03.0 00:02:21.973 00:02:21.973 User defined options 00:02:21.973 buildtype : debug 00:02:21.973 default_library : shared 00:02:21.973 libdir : lib 00:02:21.973 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:21.973 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:21.973 c_link_args : 00:02:21.973 cpu_instruction_set: native 00:02:21.973 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:21.973 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:21.973 enable_docs : false 00:02:21.973 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:21.973 enable_kmods : false 00:02:21.973 max_lcores : 128 00:02:21.973 tests : false 00:02:21.973 00:02:21.973 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:22.541 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:22.541 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:22.541 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:22.541 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:22.541 [4/268] Linking static target lib/librte_kvargs.a 00:02:22.541 [5/268] Linking static target lib/librte_log.a 00:02:22.799 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.058 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.316 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:23.316 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.316 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.316 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:23.316 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:23.575 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:23.575 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:23.575 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:23.575 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:23.575 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.575 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:23.575 [19/268] Linking static target lib/librte_telemetry.a 00:02:23.833 [20/268] Linking target lib/librte_log.so.24.1 00:02:23.833 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:24.092 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:24.092 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:24.092 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:24.092 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:24.351 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:24.351 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:24.351 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.351 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.609 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:24.609 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:24.609 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.609 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.609 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:24.609 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.867 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:24.867 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:24.867 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:25.125 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:25.125 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:25.125 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:25.125 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:25.125 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:25.383 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:25.383 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.383 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:25.642 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:25.642 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:25.642 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:25.900 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:25.900 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:25.900 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:26.158 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:26.158 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:26.158 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.416 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:26.416 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.416 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:26.674 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:26.674 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:26.674 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:26.674 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:26.932 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:26.932 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:26.932 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:27.189 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:27.189 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:27.448 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:27.448 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.448 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:27.448 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:27.744 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.744 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:27.744 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:27.744 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:27.744 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:27.744 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:27.744 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:28.001 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:28.001 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:28.001 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.260 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.260 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:28.517 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.517 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:28.517 [86/268] Linking static target lib/librte_eal.a 00:02:28.517 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:28.517 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.774 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:28.774 [90/268] Linking static target lib/librte_rcu.a 00:02:28.774 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.774 [92/268] Linking static target lib/librte_ring.a 00:02:28.774 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:29.031 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:29.031 [95/268] Linking static target lib/librte_mempool.a 00:02:29.031 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:29.031 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:29.031 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:29.031 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:29.289 [100/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:29.289 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:29.289 [102/268] Linking static target lib/librte_mbuf.a 00:02:29.289 [103/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.289 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.547 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:29.547 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:29.547 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:29.806 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:29.806 [109/268] Linking static target lib/librte_net.a 00:02:30.064 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.064 [111/268] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:30.064 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:30.064 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.064 [114/268] Linking static target lib/librte_meter.a 00:02:30.064 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:30.323 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.323 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.323 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:30.582 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.841 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:31.100 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:31.100 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:31.358 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:31.358 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:31.358 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:31.358 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:31.617 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:31.617 [128/268] Linking static target lib/librte_pci.a 00:02:31.617 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:31.617 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:31.617 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:31.617 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:31.617 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:31.617 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:31.617 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:31.876 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:31.876 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:31.876 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:31.876 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:31.876 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:31.876 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:31.876 [142/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.876 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:31.876 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:32.134 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:32.393 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:32.393 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:32.652 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:32.652 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:32.652 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 
00:02:32.652 [151/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:32.652 [152/268] Linking static target lib/librte_ethdev.a 00:02:32.652 [153/268] Linking static target lib/librte_cmdline.a 00:02:32.652 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:32.652 [155/268] Linking static target lib/librte_timer.a 00:02:32.652 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:33.219 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:33.219 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:33.219 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:33.219 [160/268] Linking static target lib/librte_compressdev.a 00:02:33.219 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:33.220 [162/268] Linking static target lib/librte_hash.a 00:02:33.478 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.478 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:33.478 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:33.737 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:33.737 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:33.737 [168/268] Linking static target lib/librte_dmadev.a 00:02:33.737 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:33.996 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:33.996 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:33.996 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.996 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:33.996 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:34.262 [175/268] Linking static target lib/librte_cryptodev.a 00:02:34.262 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.262 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.559 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:34.559 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.559 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:34.847 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:34.847 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:34.847 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:34.847 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:35.115 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:35.115 [186/268] Linking static target lib/librte_power.a 00:02:35.115 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:35.115 [188/268] Linking static target lib/librte_reorder.a 00:02:35.373 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:35.373 [190/268] Linking static target lib/librte_security.a 00:02:35.630 [191/268] Compiling 
C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:35.630 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:35.630 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:35.888 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.888 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:36.147 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.405 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.405 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:36.405 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:36.405 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:36.664 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:36.664 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.232 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:37.232 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:37.232 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:37.232 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:37.232 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:37.232 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.491 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:37.491 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:37.491 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:37.491 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:37.749 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:37.749 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:37.749 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.749 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.749 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.749 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.749 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:37.749 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:37.749 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:37.749 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:38.008 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.008 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:38.008 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.008 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.008 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:38.267 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:38.833 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:38.833 [230/268] Linking static target lib/librte_vhost.a 00:02:39.400 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.659 [232/268] Linking target lib/librte_eal.so.24.1 00:02:39.659 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:39.659 [234/268] Linking target lib/librte_ring.so.24.1 00:02:39.659 [235/268] Linking target lib/librte_timer.so.24.1 00:02:39.659 [236/268] Linking target lib/librte_meter.so.24.1 00:02:39.659 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:39.659 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:39.659 [239/268] Linking target lib/librte_pci.so.24.1 00:02:39.918 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:39.918 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:39.918 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:39.918 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:39.918 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:39.918 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:39.918 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:39.918 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:40.177 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:40.177 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:40.177 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:40.177 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:40.177 [252/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.177 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.177 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:40.177 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:40.436 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:40.436 [257/268] Linking target lib/librte_net.so.24.1 00:02:40.436 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:40.436 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:40.436 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:40.436 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:40.436 [262/268] Linking target lib/librte_hash.so.24.1 00:02:40.436 [263/268] Linking target lib/librte_security.so.24.1 00:02:40.436 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:40.696 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:40.696 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:40.696 [267/268] Linking target lib/librte_power.so.24.1 00:02:40.696 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:40.696 INFO: autodetecting backend as ninja 00:02:40.696 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:02.631 CC lib/log/log.o 00:03:02.631 CC lib/log/log_deprecated.o 00:03:02.631 CC lib/log/log_flags.o 
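Editor's note: the "User defined options" summary printed at 00:02:21 above records how this DPDK 24.03 tree was configured before the ninja build whose output follows. Purely as a point of reference, the sketch below shows a meson setup invocation that would reproduce those options by hand; the literal command emitted by SPDK's configure script is not captured in this log, so the working directory and the build-dir name (build-tmp, as reported by ninja) are assumptions.

# Sketch only - assumes it is run from /home/vagrant/spdk_repo/spdk/dpdk;
# option values are copied verbatim from the "User defined options" summary above.
meson setup build-tmp \
    --buildtype=debug \
    --default-library=shared \
    --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Denable_kmods=false \
    -Dmax_lcores=128 \
    -Dtests=false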
00:03:02.631 CC lib/ut_mock/mock.o 00:03:02.631 CC lib/ut/ut.o 00:03:02.631 LIB libspdk_ut_mock.a 00:03:02.631 LIB libspdk_log.a 00:03:02.631 LIB libspdk_ut.a 00:03:02.631 SO libspdk_ut_mock.so.6.0 00:03:02.631 SO libspdk_ut.so.2.0 00:03:02.631 SO libspdk_log.so.7.1 00:03:02.631 SYMLINK libspdk_ut.so 00:03:02.631 SYMLINK libspdk_ut_mock.so 00:03:02.631 SYMLINK libspdk_log.so 00:03:02.631 CC lib/ioat/ioat.o 00:03:02.631 CC lib/dma/dma.o 00:03:02.631 CXX lib/trace_parser/trace.o 00:03:02.631 CC lib/util/base64.o 00:03:02.631 CC lib/util/bit_array.o 00:03:02.631 CC lib/util/cpuset.o 00:03:02.631 CC lib/util/crc16.o 00:03:02.631 CC lib/util/crc32.o 00:03:02.631 CC lib/util/crc32c.o 00:03:02.631 CC lib/vfio_user/host/vfio_user_pci.o 00:03:02.631 CC lib/util/crc32_ieee.o 00:03:02.631 CC lib/util/crc64.o 00:03:02.631 CC lib/vfio_user/host/vfio_user.o 00:03:02.631 CC lib/util/dif.o 00:03:02.631 LIB libspdk_dma.a 00:03:02.631 CC lib/util/fd.o 00:03:02.631 CC lib/util/fd_group.o 00:03:02.631 SO libspdk_dma.so.5.0 00:03:02.631 LIB libspdk_ioat.a 00:03:02.631 CC lib/util/file.o 00:03:02.631 SYMLINK libspdk_dma.so 00:03:02.631 CC lib/util/hexlify.o 00:03:02.631 CC lib/util/iov.o 00:03:02.631 SO libspdk_ioat.so.7.0 00:03:02.631 SYMLINK libspdk_ioat.so 00:03:02.631 CC lib/util/math.o 00:03:02.631 CC lib/util/net.o 00:03:02.631 CC lib/util/pipe.o 00:03:02.631 LIB libspdk_vfio_user.a 00:03:02.631 SO libspdk_vfio_user.so.5.0 00:03:02.631 CC lib/util/strerror_tls.o 00:03:02.631 CC lib/util/string.o 00:03:02.631 SYMLINK libspdk_vfio_user.so 00:03:02.631 CC lib/util/uuid.o 00:03:02.631 CC lib/util/xor.o 00:03:02.631 CC lib/util/zipf.o 00:03:02.631 CC lib/util/md5.o 00:03:02.631 LIB libspdk_util.a 00:03:02.631 SO libspdk_util.so.10.1 00:03:02.631 SYMLINK libspdk_util.so 00:03:02.631 LIB libspdk_trace_parser.a 00:03:02.907 SO libspdk_trace_parser.so.6.0 00:03:02.907 SYMLINK libspdk_trace_parser.so 00:03:02.907 CC lib/rdma_utils/rdma_utils.o 00:03:02.907 CC lib/conf/conf.o 00:03:02.907 CC lib/json/json_parse.o 00:03:02.907 CC lib/json/json_util.o 00:03:02.907 CC lib/json/json_write.o 00:03:02.907 CC lib/vmd/vmd.o 00:03:02.907 CC lib/vmd/led.o 00:03:02.907 CC lib/env_dpdk/env.o 00:03:02.907 CC lib/env_dpdk/memory.o 00:03:02.907 CC lib/idxd/idxd.o 00:03:03.167 CC lib/env_dpdk/pci.o 00:03:03.167 CC lib/idxd/idxd_user.o 00:03:03.167 CC lib/env_dpdk/init.o 00:03:03.167 LIB libspdk_conf.a 00:03:03.167 SO libspdk_conf.so.6.0 00:03:03.167 LIB libspdk_rdma_utils.a 00:03:03.167 LIB libspdk_json.a 00:03:03.167 SO libspdk_rdma_utils.so.1.0 00:03:03.167 SYMLINK libspdk_conf.so 00:03:03.167 SO libspdk_json.so.6.0 00:03:03.167 CC lib/idxd/idxd_kernel.o 00:03:03.167 SYMLINK libspdk_rdma_utils.so 00:03:03.167 CC lib/env_dpdk/threads.o 00:03:03.167 SYMLINK libspdk_json.so 00:03:03.427 CC lib/env_dpdk/pci_ioat.o 00:03:03.427 CC lib/env_dpdk/pci_virtio.o 00:03:03.427 CC lib/env_dpdk/pci_vmd.o 00:03:03.427 CC lib/env_dpdk/pci_idxd.o 00:03:03.427 CC lib/env_dpdk/pci_event.o 00:03:03.427 CC lib/env_dpdk/sigbus_handler.o 00:03:03.427 LIB libspdk_idxd.a 00:03:03.427 CC lib/env_dpdk/pci_dpdk.o 00:03:03.427 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:03.427 SO libspdk_idxd.so.12.1 00:03:03.427 LIB libspdk_vmd.a 00:03:03.686 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:03.686 SO libspdk_vmd.so.6.0 00:03:03.686 SYMLINK libspdk_idxd.so 00:03:03.686 SYMLINK libspdk_vmd.so 00:03:03.686 CC lib/rdma_provider/common.o 00:03:03.686 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:03.686 CC lib/jsonrpc/jsonrpc_server.o 00:03:03.686 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:03:03.686 CC lib/jsonrpc/jsonrpc_client.o 00:03:03.686 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:03.945 LIB libspdk_rdma_provider.a 00:03:03.945 SO libspdk_rdma_provider.so.7.0 00:03:03.945 LIB libspdk_jsonrpc.a 00:03:04.205 SYMLINK libspdk_rdma_provider.so 00:03:04.205 SO libspdk_jsonrpc.so.6.0 00:03:04.205 SYMLINK libspdk_jsonrpc.so 00:03:04.205 LIB libspdk_env_dpdk.a 00:03:04.465 SO libspdk_env_dpdk.so.15.1 00:03:04.465 CC lib/rpc/rpc.o 00:03:04.465 SYMLINK libspdk_env_dpdk.so 00:03:04.725 LIB libspdk_rpc.a 00:03:04.725 SO libspdk_rpc.so.6.0 00:03:04.725 SYMLINK libspdk_rpc.so 00:03:04.984 CC lib/keyring/keyring_rpc.o 00:03:04.984 CC lib/keyring/keyring.o 00:03:04.984 CC lib/notify/notify_rpc.o 00:03:04.984 CC lib/notify/notify.o 00:03:04.984 CC lib/trace/trace.o 00:03:04.984 CC lib/trace/trace_flags.o 00:03:04.984 CC lib/trace/trace_rpc.o 00:03:04.984 LIB libspdk_notify.a 00:03:05.244 LIB libspdk_keyring.a 00:03:05.244 SO libspdk_notify.so.6.0 00:03:05.244 SO libspdk_keyring.so.2.0 00:03:05.244 SYMLINK libspdk_notify.so 00:03:05.244 LIB libspdk_trace.a 00:03:05.244 SYMLINK libspdk_keyring.so 00:03:05.244 SO libspdk_trace.so.11.0 00:03:05.244 SYMLINK libspdk_trace.so 00:03:05.503 CC lib/sock/sock.o 00:03:05.503 CC lib/sock/sock_rpc.o 00:03:05.503 CC lib/thread/thread.o 00:03:05.503 CC lib/thread/iobuf.o 00:03:06.072 LIB libspdk_sock.a 00:03:06.072 SO libspdk_sock.so.10.0 00:03:06.072 SYMLINK libspdk_sock.so 00:03:06.331 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:06.331 CC lib/nvme/nvme_ctrlr.o 00:03:06.331 CC lib/nvme/nvme_fabric.o 00:03:06.331 CC lib/nvme/nvme_ns_cmd.o 00:03:06.331 CC lib/nvme/nvme_ns.o 00:03:06.331 CC lib/nvme/nvme_pcie.o 00:03:06.331 CC lib/nvme/nvme_pcie_common.o 00:03:06.331 CC lib/nvme/nvme.o 00:03:06.331 CC lib/nvme/nvme_qpair.o 00:03:07.270 LIB libspdk_thread.a 00:03:07.270 SO libspdk_thread.so.11.0 00:03:07.270 CC lib/nvme/nvme_quirks.o 00:03:07.270 CC lib/nvme/nvme_transport.o 00:03:07.270 SYMLINK libspdk_thread.so 00:03:07.270 CC lib/nvme/nvme_discovery.o 00:03:07.270 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:07.530 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:07.530 CC lib/nvme/nvme_tcp.o 00:03:07.530 CC lib/accel/accel.o 00:03:07.530 CC lib/nvme/nvme_opal.o 00:03:07.530 CC lib/nvme/nvme_io_msg.o 00:03:07.789 CC lib/nvme/nvme_poll_group.o 00:03:08.048 CC lib/nvme/nvme_zns.o 00:03:08.048 CC lib/nvme/nvme_stubs.o 00:03:08.048 CC lib/accel/accel_rpc.o 00:03:08.307 CC lib/accel/accel_sw.o 00:03:08.307 CC lib/blob/blobstore.o 00:03:08.307 CC lib/blob/request.o 00:03:08.307 CC lib/init/json_config.o 00:03:08.567 CC lib/init/subsystem.o 00:03:08.568 CC lib/init/subsystem_rpc.o 00:03:08.568 CC lib/init/rpc.o 00:03:08.568 LIB libspdk_accel.a 00:03:08.568 CC lib/blob/zeroes.o 00:03:08.568 SO libspdk_accel.so.16.0 00:03:08.568 CC lib/blob/blob_bs_dev.o 00:03:08.568 SYMLINK libspdk_accel.so 00:03:08.568 CC lib/nvme/nvme_auth.o 00:03:08.826 LIB libspdk_init.a 00:03:08.826 CC lib/virtio/virtio.o 00:03:08.826 CC lib/virtio/virtio_vhost_user.o 00:03:08.826 CC lib/fsdev/fsdev.o 00:03:08.826 CC lib/nvme/nvme_cuse.o 00:03:08.826 SO libspdk_init.so.6.0 00:03:08.826 SYMLINK libspdk_init.so 00:03:08.826 CC lib/bdev/bdev.o 00:03:08.826 CC lib/virtio/virtio_vfio_user.o 00:03:09.084 CC lib/virtio/virtio_pci.o 00:03:09.084 CC lib/event/app.o 00:03:09.084 CC lib/event/reactor.o 00:03:09.084 CC lib/event/log_rpc.o 00:03:09.084 CC lib/fsdev/fsdev_io.o 00:03:09.342 CC lib/event/app_rpc.o 00:03:09.343 LIB libspdk_virtio.a 00:03:09.343 SO libspdk_virtio.so.7.0 
00:03:09.343 SYMLINK libspdk_virtio.so 00:03:09.343 CC lib/event/scheduler_static.o 00:03:09.343 CC lib/bdev/bdev_rpc.o 00:03:09.343 CC lib/nvme/nvme_rdma.o 00:03:09.343 CC lib/bdev/bdev_zone.o 00:03:09.601 CC lib/fsdev/fsdev_rpc.o 00:03:09.601 CC lib/bdev/part.o 00:03:09.601 LIB libspdk_event.a 00:03:09.601 SO libspdk_event.so.14.0 00:03:09.601 LIB libspdk_fsdev.a 00:03:09.601 CC lib/bdev/scsi_nvme.o 00:03:09.601 SYMLINK libspdk_event.so 00:03:09.601 SO libspdk_fsdev.so.2.0 00:03:09.860 SYMLINK libspdk_fsdev.so 00:03:09.860 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:10.428 LIB libspdk_fuse_dispatcher.a 00:03:10.687 SO libspdk_fuse_dispatcher.so.1.0 00:03:10.687 SYMLINK libspdk_fuse_dispatcher.so 00:03:10.687 LIB libspdk_nvme.a 00:03:10.946 SO libspdk_nvme.so.15.0 00:03:11.204 LIB libspdk_blob.a 00:03:11.204 SYMLINK libspdk_nvme.so 00:03:11.204 SO libspdk_blob.so.12.0 00:03:11.204 SYMLINK libspdk_blob.so 00:03:11.462 LIB libspdk_bdev.a 00:03:11.462 SO libspdk_bdev.so.17.0 00:03:11.462 CC lib/lvol/lvol.o 00:03:11.462 CC lib/blobfs/blobfs.o 00:03:11.462 CC lib/blobfs/tree.o 00:03:11.720 SYMLINK libspdk_bdev.so 00:03:11.720 CC lib/scsi/dev.o 00:03:11.720 CC lib/scsi/lun.o 00:03:11.720 CC lib/scsi/scsi.o 00:03:11.720 CC lib/ublk/ublk.o 00:03:11.720 CC lib/nvmf/ctrlr.o 00:03:11.720 CC lib/nbd/nbd.o 00:03:11.720 CC lib/scsi/port.o 00:03:11.720 CC lib/ftl/ftl_core.o 00:03:11.978 CC lib/ftl/ftl_init.o 00:03:11.978 CC lib/ftl/ftl_layout.o 00:03:11.978 CC lib/ftl/ftl_debug.o 00:03:12.235 CC lib/scsi/scsi_bdev.o 00:03:12.235 CC lib/scsi/scsi_pr.o 00:03:12.235 CC lib/nbd/nbd_rpc.o 00:03:12.235 CC lib/ftl/ftl_io.o 00:03:12.235 CC lib/nvmf/ctrlr_discovery.o 00:03:12.235 CC lib/ftl/ftl_sb.o 00:03:12.492 LIB libspdk_nbd.a 00:03:12.492 SO libspdk_nbd.so.7.0 00:03:12.492 LIB libspdk_blobfs.a 00:03:12.492 CC lib/ublk/ublk_rpc.o 00:03:12.492 SO libspdk_blobfs.so.11.0 00:03:12.492 SYMLINK libspdk_nbd.so 00:03:12.492 CC lib/ftl/ftl_l2p.o 00:03:12.492 CC lib/nvmf/ctrlr_bdev.o 00:03:12.492 CC lib/scsi/scsi_rpc.o 00:03:12.492 LIB libspdk_lvol.a 00:03:12.492 CC lib/nvmf/subsystem.o 00:03:12.492 SYMLINK libspdk_blobfs.so 00:03:12.492 CC lib/nvmf/nvmf.o 00:03:12.492 SO libspdk_lvol.so.11.0 00:03:12.492 LIB libspdk_ublk.a 00:03:12.492 SYMLINK libspdk_lvol.so 00:03:12.492 CC lib/scsi/task.o 00:03:12.750 SO libspdk_ublk.so.3.0 00:03:12.750 CC lib/nvmf/nvmf_rpc.o 00:03:12.750 CC lib/ftl/ftl_l2p_flat.o 00:03:12.750 SYMLINK libspdk_ublk.so 00:03:12.750 CC lib/nvmf/transport.o 00:03:12.750 CC lib/ftl/ftl_nv_cache.o 00:03:12.750 CC lib/nvmf/tcp.o 00:03:12.750 LIB libspdk_scsi.a 00:03:13.008 CC lib/ftl/ftl_band.o 00:03:13.008 SO libspdk_scsi.so.9.0 00:03:13.008 SYMLINK libspdk_scsi.so 00:03:13.008 CC lib/ftl/ftl_band_ops.o 00:03:13.281 CC lib/nvmf/stubs.o 00:03:13.281 CC lib/nvmf/mdns_server.o 00:03:13.281 CC lib/nvmf/rdma.o 00:03:13.554 CC lib/ftl/ftl_writer.o 00:03:13.554 CC lib/ftl/ftl_rq.o 00:03:13.554 CC lib/ftl/ftl_reloc.o 00:03:13.554 CC lib/ftl/ftl_l2p_cache.o 00:03:13.554 CC lib/nvmf/auth.o 00:03:13.812 CC lib/ftl/ftl_p2l.o 00:03:13.812 CC lib/iscsi/conn.o 00:03:13.812 CC lib/iscsi/init_grp.o 00:03:13.812 CC lib/vhost/vhost.o 00:03:13.812 CC lib/iscsi/iscsi.o 00:03:13.812 CC lib/ftl/ftl_p2l_log.o 00:03:14.070 CC lib/iscsi/param.o 00:03:14.070 CC lib/iscsi/portal_grp.o 00:03:14.070 CC lib/ftl/mngt/ftl_mngt.o 00:03:14.328 CC lib/vhost/vhost_rpc.o 00:03:14.328 CC lib/vhost/vhost_scsi.o 00:03:14.328 CC lib/iscsi/tgt_node.o 00:03:14.328 CC lib/iscsi/iscsi_subsystem.o 00:03:14.328 CC lib/iscsi/iscsi_rpc.o 00:03:14.328 
CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:14.586 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:14.586 CC lib/iscsi/task.o 00:03:14.586 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:14.844 CC lib/vhost/vhost_blk.o 00:03:14.844 CC lib/vhost/rte_vhost_user.o 00:03:14.844 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:14.844 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:14.844 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:14.845 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:14.845 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:15.103 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:15.103 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:15.103 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:15.103 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:15.103 LIB libspdk_iscsi.a 00:03:15.363 CC lib/ftl/utils/ftl_conf.o 00:03:15.363 CC lib/ftl/utils/ftl_md.o 00:03:15.363 CC lib/ftl/utils/ftl_mempool.o 00:03:15.363 CC lib/ftl/utils/ftl_bitmap.o 00:03:15.363 SO libspdk_iscsi.so.8.0 00:03:15.363 CC lib/ftl/utils/ftl_property.o 00:03:15.363 LIB libspdk_nvmf.a 00:03:15.622 SYMLINK libspdk_iscsi.so 00:03:15.622 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:15.622 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:15.622 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:15.622 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:15.622 SO libspdk_nvmf.so.20.0 00:03:15.622 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:15.622 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:15.622 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:15.622 SYMLINK libspdk_nvmf.so 00:03:15.881 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:15.881 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:15.881 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:15.881 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:15.881 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:15.881 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:15.881 CC lib/ftl/base/ftl_base_dev.o 00:03:15.881 CC lib/ftl/base/ftl_base_bdev.o 00:03:15.881 LIB libspdk_vhost.a 00:03:15.881 CC lib/ftl/ftl_trace.o 00:03:15.881 SO libspdk_vhost.so.8.0 00:03:16.140 SYMLINK libspdk_vhost.so 00:03:16.140 LIB libspdk_ftl.a 00:03:16.399 SO libspdk_ftl.so.9.0 00:03:16.659 SYMLINK libspdk_ftl.so 00:03:16.918 CC module/env_dpdk/env_dpdk_rpc.o 00:03:16.918 CC module/keyring/file/keyring.o 00:03:16.918 CC module/keyring/linux/keyring.o 00:03:16.918 CC module/accel/error/accel_error.o 00:03:16.918 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:16.918 CC module/accel/dsa/accel_dsa.o 00:03:16.918 CC module/sock/posix/posix.o 00:03:16.918 CC module/accel/ioat/accel_ioat.o 00:03:16.918 CC module/fsdev/aio/fsdev_aio.o 00:03:16.918 CC module/blob/bdev/blob_bdev.o 00:03:17.177 LIB libspdk_env_dpdk_rpc.a 00:03:17.177 SO libspdk_env_dpdk_rpc.so.6.0 00:03:17.177 SYMLINK libspdk_env_dpdk_rpc.so 00:03:17.177 CC module/accel/error/accel_error_rpc.o 00:03:17.177 CC module/keyring/linux/keyring_rpc.o 00:03:17.177 CC module/keyring/file/keyring_rpc.o 00:03:17.177 CC module/accel/ioat/accel_ioat_rpc.o 00:03:17.177 LIB libspdk_scheduler_dynamic.a 00:03:17.177 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:17.177 SO libspdk_scheduler_dynamic.so.4.0 00:03:17.177 LIB libspdk_keyring_linux.a 00:03:17.177 LIB libspdk_accel_error.a 00:03:17.435 CC module/accel/dsa/accel_dsa_rpc.o 00:03:17.435 SYMLINK libspdk_scheduler_dynamic.so 00:03:17.435 LIB libspdk_blob_bdev.a 00:03:17.435 LIB libspdk_keyring_file.a 00:03:17.435 SO libspdk_keyring_linux.so.1.0 00:03:17.435 SO libspdk_accel_error.so.2.0 00:03:17.435 SO libspdk_blob_bdev.so.12.0 00:03:17.435 SO libspdk_keyring_file.so.2.0 00:03:17.435 LIB libspdk_accel_ioat.a 00:03:17.435 SO libspdk_accel_ioat.so.6.0 00:03:17.435 SYMLINK 
libspdk_keyring_linux.so 00:03:17.435 SYMLINK libspdk_accel_error.so 00:03:17.435 SYMLINK libspdk_blob_bdev.so 00:03:17.435 SYMLINK libspdk_keyring_file.so 00:03:17.435 SYMLINK libspdk_accel_ioat.so 00:03:17.435 LIB libspdk_accel_dsa.a 00:03:17.435 CC module/fsdev/aio/linux_aio_mgr.o 00:03:17.435 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:17.435 SO libspdk_accel_dsa.so.5.0 00:03:17.694 CC module/scheduler/gscheduler/gscheduler.o 00:03:17.694 CC module/sock/uring/uring.o 00:03:17.694 SYMLINK libspdk_accel_dsa.so 00:03:17.694 CC module/accel/iaa/accel_iaa.o 00:03:17.694 CC module/accel/iaa/accel_iaa_rpc.o 00:03:17.694 LIB libspdk_scheduler_dpdk_governor.a 00:03:17.694 CC module/bdev/delay/vbdev_delay.o 00:03:17.694 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:17.694 CC module/blobfs/bdev/blobfs_bdev.o 00:03:17.694 LIB libspdk_fsdev_aio.a 00:03:17.694 LIB libspdk_sock_posix.a 00:03:17.694 LIB libspdk_scheduler_gscheduler.a 00:03:17.694 SO libspdk_fsdev_aio.so.1.0 00:03:17.694 CC module/bdev/error/vbdev_error.o 00:03:17.694 SO libspdk_sock_posix.so.6.0 00:03:17.694 SO libspdk_scheduler_gscheduler.so.4.0 00:03:17.694 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:17.953 CC module/bdev/error/vbdev_error_rpc.o 00:03:17.953 LIB libspdk_accel_iaa.a 00:03:17.953 SYMLINK libspdk_fsdev_aio.so 00:03:17.953 SYMLINK libspdk_scheduler_gscheduler.so 00:03:17.953 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:17.953 SYMLINK libspdk_sock_posix.so 00:03:17.953 SO libspdk_accel_iaa.so.3.0 00:03:17.953 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:17.953 SYMLINK libspdk_accel_iaa.so 00:03:17.953 CC module/bdev/gpt/gpt.o 00:03:17.953 CC module/bdev/lvol/vbdev_lvol.o 00:03:17.953 CC module/bdev/malloc/bdev_malloc.o 00:03:17.953 LIB libspdk_blobfs_bdev.a 00:03:17.953 LIB libspdk_bdev_error.a 00:03:18.211 SO libspdk_blobfs_bdev.so.6.0 00:03:18.211 SO libspdk_bdev_error.so.6.0 00:03:18.211 LIB libspdk_bdev_delay.a 00:03:18.211 CC module/bdev/gpt/vbdev_gpt.o 00:03:18.211 CC module/bdev/null/bdev_null.o 00:03:18.211 SO libspdk_bdev_delay.so.6.0 00:03:18.211 SYMLINK libspdk_blobfs_bdev.so 00:03:18.211 SYMLINK libspdk_bdev_error.so 00:03:18.211 CC module/bdev/nvme/bdev_nvme.o 00:03:18.211 CC module/bdev/null/bdev_null_rpc.o 00:03:18.211 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:18.211 SYMLINK libspdk_bdev_delay.so 00:03:18.211 CC module/bdev/nvme/nvme_rpc.o 00:03:18.211 LIB libspdk_sock_uring.a 00:03:18.470 SO libspdk_sock_uring.so.5.0 00:03:18.470 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:18.470 CC module/bdev/passthru/vbdev_passthru.o 00:03:18.470 LIB libspdk_bdev_null.a 00:03:18.470 LIB libspdk_bdev_gpt.a 00:03:18.470 SYMLINK libspdk_sock_uring.so 00:03:18.470 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:18.470 SO libspdk_bdev_null.so.6.0 00:03:18.470 SO libspdk_bdev_gpt.so.6.0 00:03:18.470 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:18.470 SYMLINK libspdk_bdev_gpt.so 00:03:18.470 SYMLINK libspdk_bdev_null.so 00:03:18.470 CC module/bdev/nvme/bdev_mdns_client.o 00:03:18.470 LIB libspdk_bdev_malloc.a 00:03:18.470 CC module/bdev/nvme/vbdev_opal.o 00:03:18.470 SO libspdk_bdev_malloc.so.6.0 00:03:18.729 CC module/bdev/raid/bdev_raid.o 00:03:18.729 CC module/bdev/split/vbdev_split.o 00:03:18.729 SYMLINK libspdk_bdev_malloc.so 00:03:18.729 CC module/bdev/split/vbdev_split_rpc.o 00:03:18.729 LIB libspdk_bdev_passthru.a 00:03:18.729 SO libspdk_bdev_passthru.so.6.0 00:03:18.729 LIB libspdk_bdev_lvol.a 00:03:18.729 SYMLINK libspdk_bdev_passthru.so 00:03:18.729 CC module/bdev/nvme/vbdev_opal_rpc.o 
00:03:18.729 CC module/bdev/raid/bdev_raid_rpc.o 00:03:18.729 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:18.729 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:18.729 SO libspdk_bdev_lvol.so.6.0 00:03:18.729 CC module/bdev/uring/bdev_uring.o 00:03:18.729 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:18.987 LIB libspdk_bdev_split.a 00:03:18.987 SYMLINK libspdk_bdev_lvol.so 00:03:18.987 SO libspdk_bdev_split.so.6.0 00:03:18.987 SYMLINK libspdk_bdev_split.so 00:03:18.987 CC module/bdev/uring/bdev_uring_rpc.o 00:03:18.987 CC module/bdev/raid/bdev_raid_sb.o 00:03:18.987 CC module/bdev/aio/bdev_aio.o 00:03:19.245 CC module/bdev/ftl/bdev_ftl.o 00:03:19.245 LIB libspdk_bdev_zone_block.a 00:03:19.245 CC module/bdev/iscsi/bdev_iscsi.o 00:03:19.245 SO libspdk_bdev_zone_block.so.6.0 00:03:19.245 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:19.245 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:19.245 LIB libspdk_bdev_uring.a 00:03:19.245 SO libspdk_bdev_uring.so.6.0 00:03:19.245 SYMLINK libspdk_bdev_zone_block.so 00:03:19.245 CC module/bdev/aio/bdev_aio_rpc.o 00:03:19.245 SYMLINK libspdk_bdev_uring.so 00:03:19.245 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:19.245 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:19.504 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:19.504 CC module/bdev/raid/raid0.o 00:03:19.504 CC module/bdev/raid/raid1.o 00:03:19.504 LIB libspdk_bdev_aio.a 00:03:19.504 SO libspdk_bdev_aio.so.6.0 00:03:19.504 SYMLINK libspdk_bdev_aio.so 00:03:19.504 CC module/bdev/raid/concat.o 00:03:19.504 LIB libspdk_bdev_iscsi.a 00:03:19.763 LIB libspdk_bdev_ftl.a 00:03:19.763 SO libspdk_bdev_iscsi.so.6.0 00:03:19.763 SO libspdk_bdev_ftl.so.6.0 00:03:19.763 SYMLINK libspdk_bdev_iscsi.so 00:03:19.763 LIB libspdk_bdev_virtio.a 00:03:19.763 SYMLINK libspdk_bdev_ftl.so 00:03:19.763 SO libspdk_bdev_virtio.so.6.0 00:03:19.763 LIB libspdk_bdev_raid.a 00:03:19.763 SYMLINK libspdk_bdev_virtio.so 00:03:20.022 SO libspdk_bdev_raid.so.6.0 00:03:20.022 SYMLINK libspdk_bdev_raid.so 00:03:20.589 LIB libspdk_bdev_nvme.a 00:03:20.847 SO libspdk_bdev_nvme.so.7.1 00:03:20.847 SYMLINK libspdk_bdev_nvme.so 00:03:21.414 CC module/event/subsystems/fsdev/fsdev.o 00:03:21.414 CC module/event/subsystems/sock/sock.o 00:03:21.414 CC module/event/subsystems/iobuf/iobuf.o 00:03:21.414 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:21.414 CC module/event/subsystems/vmd/vmd.o 00:03:21.414 CC module/event/subsystems/scheduler/scheduler.o 00:03:21.414 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:21.414 CC module/event/subsystems/keyring/keyring.o 00:03:21.414 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:21.414 LIB libspdk_event_scheduler.a 00:03:21.414 LIB libspdk_event_fsdev.a 00:03:21.414 LIB libspdk_event_vhost_blk.a 00:03:21.414 LIB libspdk_event_keyring.a 00:03:21.414 LIB libspdk_event_vmd.a 00:03:21.414 LIB libspdk_event_sock.a 00:03:21.414 LIB libspdk_event_iobuf.a 00:03:21.414 SO libspdk_event_fsdev.so.1.0 00:03:21.414 SO libspdk_event_vhost_blk.so.3.0 00:03:21.414 SO libspdk_event_scheduler.so.4.0 00:03:21.414 SO libspdk_event_keyring.so.1.0 00:03:21.414 SO libspdk_event_sock.so.5.0 00:03:21.414 SO libspdk_event_vmd.so.6.0 00:03:21.414 SO libspdk_event_iobuf.so.3.0 00:03:21.414 SYMLINK libspdk_event_fsdev.so 00:03:21.414 SYMLINK libspdk_event_vhost_blk.so 00:03:21.414 SYMLINK libspdk_event_scheduler.so 00:03:21.414 SYMLINK libspdk_event_keyring.so 00:03:21.414 SYMLINK libspdk_event_sock.so 00:03:21.672 SYMLINK libspdk_event_iobuf.so 00:03:21.672 SYMLINK libspdk_event_vmd.so 00:03:21.672 CC 
module/event/subsystems/accel/accel.o 00:03:21.930 LIB libspdk_event_accel.a 00:03:21.930 SO libspdk_event_accel.so.6.0 00:03:21.930 SYMLINK libspdk_event_accel.so 00:03:22.188 CC module/event/subsystems/bdev/bdev.o 00:03:22.445 LIB libspdk_event_bdev.a 00:03:22.445 SO libspdk_event_bdev.so.6.0 00:03:22.445 SYMLINK libspdk_event_bdev.so 00:03:22.704 CC module/event/subsystems/nbd/nbd.o 00:03:22.704 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:22.704 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:22.704 CC module/event/subsystems/scsi/scsi.o 00:03:22.704 CC module/event/subsystems/ublk/ublk.o 00:03:22.962 LIB libspdk_event_nbd.a 00:03:22.962 LIB libspdk_event_ublk.a 00:03:22.962 SO libspdk_event_nbd.so.6.0 00:03:22.962 LIB libspdk_event_scsi.a 00:03:22.962 SO libspdk_event_ublk.so.3.0 00:03:22.962 SO libspdk_event_scsi.so.6.0 00:03:22.962 SYMLINK libspdk_event_nbd.so 00:03:22.962 SYMLINK libspdk_event_ublk.so 00:03:22.962 SYMLINK libspdk_event_scsi.so 00:03:22.962 LIB libspdk_event_nvmf.a 00:03:23.221 SO libspdk_event_nvmf.so.6.0 00:03:23.221 SYMLINK libspdk_event_nvmf.so 00:03:23.221 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:23.221 CC module/event/subsystems/iscsi/iscsi.o 00:03:23.479 LIB libspdk_event_vhost_scsi.a 00:03:23.479 LIB libspdk_event_iscsi.a 00:03:23.479 SO libspdk_event_vhost_scsi.so.3.0 00:03:23.479 SO libspdk_event_iscsi.so.6.0 00:03:23.479 SYMLINK libspdk_event_vhost_scsi.so 00:03:23.738 SYMLINK libspdk_event_iscsi.so 00:03:23.738 SO libspdk.so.6.0 00:03:23.738 SYMLINK libspdk.so 00:03:23.996 CC app/trace_record/trace_record.o 00:03:23.996 CC app/spdk_nvme_identify/identify.o 00:03:23.996 CXX app/trace/trace.o 00:03:23.996 CC app/spdk_nvme_perf/perf.o 00:03:23.996 CC app/spdk_lspci/spdk_lspci.o 00:03:23.996 CC app/iscsi_tgt/iscsi_tgt.o 00:03:23.996 CC app/nvmf_tgt/nvmf_main.o 00:03:23.996 CC app/spdk_tgt/spdk_tgt.o 00:03:24.254 CC examples/util/zipf/zipf.o 00:03:24.254 CC test/thread/poller_perf/poller_perf.o 00:03:24.254 LINK spdk_lspci 00:03:24.254 LINK zipf 00:03:24.254 LINK spdk_trace_record 00:03:24.254 LINK nvmf_tgt 00:03:24.254 LINK poller_perf 00:03:24.254 LINK iscsi_tgt 00:03:24.513 LINK spdk_tgt 00:03:24.513 CC app/spdk_nvme_discover/discovery_aer.o 00:03:24.513 LINK spdk_trace 00:03:24.771 CC examples/ioat/perf/perf.o 00:03:24.771 TEST_HEADER include/spdk/accel.h 00:03:24.771 TEST_HEADER include/spdk/accel_module.h 00:03:24.771 TEST_HEADER include/spdk/assert.h 00:03:24.771 TEST_HEADER include/spdk/barrier.h 00:03:24.771 TEST_HEADER include/spdk/base64.h 00:03:24.771 TEST_HEADER include/spdk/bdev.h 00:03:24.771 TEST_HEADER include/spdk/bdev_module.h 00:03:24.771 TEST_HEADER include/spdk/bdev_zone.h 00:03:24.771 LINK spdk_nvme_discover 00:03:24.771 CC app/spdk_top/spdk_top.o 00:03:24.771 TEST_HEADER include/spdk/bit_array.h 00:03:24.771 CC examples/vmd/lsvmd/lsvmd.o 00:03:24.771 TEST_HEADER include/spdk/bit_pool.h 00:03:24.771 TEST_HEADER include/spdk/blob_bdev.h 00:03:24.771 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:24.771 TEST_HEADER include/spdk/blobfs.h 00:03:24.771 TEST_HEADER include/spdk/blob.h 00:03:24.771 TEST_HEADER include/spdk/conf.h 00:03:24.771 TEST_HEADER include/spdk/config.h 00:03:24.771 CC test/dma/test_dma/test_dma.o 00:03:24.771 TEST_HEADER include/spdk/cpuset.h 00:03:24.771 TEST_HEADER include/spdk/crc16.h 00:03:24.771 TEST_HEADER include/spdk/crc32.h 00:03:24.771 TEST_HEADER include/spdk/crc64.h 00:03:24.771 TEST_HEADER include/spdk/dif.h 00:03:24.771 TEST_HEADER include/spdk/dma.h 00:03:24.771 TEST_HEADER 
include/spdk/endian.h 00:03:24.771 TEST_HEADER include/spdk/env_dpdk.h 00:03:24.771 TEST_HEADER include/spdk/env.h 00:03:24.771 TEST_HEADER include/spdk/event.h 00:03:24.771 TEST_HEADER include/spdk/fd_group.h 00:03:24.771 TEST_HEADER include/spdk/fd.h 00:03:24.771 TEST_HEADER include/spdk/file.h 00:03:24.771 TEST_HEADER include/spdk/fsdev.h 00:03:24.771 TEST_HEADER include/spdk/fsdev_module.h 00:03:24.771 TEST_HEADER include/spdk/ftl.h 00:03:24.771 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:24.771 TEST_HEADER include/spdk/gpt_spec.h 00:03:24.771 TEST_HEADER include/spdk/hexlify.h 00:03:24.771 TEST_HEADER include/spdk/histogram_data.h 00:03:24.771 TEST_HEADER include/spdk/idxd.h 00:03:24.771 TEST_HEADER include/spdk/idxd_spec.h 00:03:24.771 TEST_HEADER include/spdk/init.h 00:03:24.771 TEST_HEADER include/spdk/ioat.h 00:03:24.771 TEST_HEADER include/spdk/ioat_spec.h 00:03:24.771 TEST_HEADER include/spdk/iscsi_spec.h 00:03:24.771 TEST_HEADER include/spdk/json.h 00:03:24.772 TEST_HEADER include/spdk/jsonrpc.h 00:03:24.772 TEST_HEADER include/spdk/keyring.h 00:03:24.772 TEST_HEADER include/spdk/keyring_module.h 00:03:24.772 TEST_HEADER include/spdk/likely.h 00:03:24.772 TEST_HEADER include/spdk/log.h 00:03:24.772 TEST_HEADER include/spdk/lvol.h 00:03:24.772 TEST_HEADER include/spdk/md5.h 00:03:24.772 TEST_HEADER include/spdk/memory.h 00:03:24.772 CC test/app/bdev_svc/bdev_svc.o 00:03:24.772 CC examples/vmd/led/led.o 00:03:24.772 TEST_HEADER include/spdk/mmio.h 00:03:24.772 TEST_HEADER include/spdk/nbd.h 00:03:24.772 TEST_HEADER include/spdk/net.h 00:03:24.772 TEST_HEADER include/spdk/notify.h 00:03:24.772 TEST_HEADER include/spdk/nvme.h 00:03:24.772 TEST_HEADER include/spdk/nvme_intel.h 00:03:24.772 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:24.772 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:24.772 TEST_HEADER include/spdk/nvme_spec.h 00:03:24.772 TEST_HEADER include/spdk/nvme_zns.h 00:03:24.772 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:24.772 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:24.772 TEST_HEADER include/spdk/nvmf.h 00:03:24.772 TEST_HEADER include/spdk/nvmf_spec.h 00:03:24.772 TEST_HEADER include/spdk/nvmf_transport.h 00:03:24.772 LINK lsvmd 00:03:24.772 TEST_HEADER include/spdk/opal.h 00:03:24.772 TEST_HEADER include/spdk/opal_spec.h 00:03:24.772 TEST_HEADER include/spdk/pci_ids.h 00:03:24.772 TEST_HEADER include/spdk/pipe.h 00:03:24.772 TEST_HEADER include/spdk/queue.h 00:03:24.772 TEST_HEADER include/spdk/reduce.h 00:03:24.772 TEST_HEADER include/spdk/rpc.h 00:03:24.772 TEST_HEADER include/spdk/scheduler.h 00:03:24.772 TEST_HEADER include/spdk/scsi.h 00:03:24.772 TEST_HEADER include/spdk/scsi_spec.h 00:03:24.772 TEST_HEADER include/spdk/sock.h 00:03:24.772 TEST_HEADER include/spdk/stdinc.h 00:03:24.772 TEST_HEADER include/spdk/string.h 00:03:24.772 TEST_HEADER include/spdk/thread.h 00:03:24.772 TEST_HEADER include/spdk/trace.h 00:03:24.772 TEST_HEADER include/spdk/trace_parser.h 00:03:24.772 LINK ioat_perf 00:03:25.032 TEST_HEADER include/spdk/tree.h 00:03:25.032 TEST_HEADER include/spdk/ublk.h 00:03:25.032 TEST_HEADER include/spdk/util.h 00:03:25.032 TEST_HEADER include/spdk/uuid.h 00:03:25.032 TEST_HEADER include/spdk/version.h 00:03:25.032 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:25.032 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:25.032 TEST_HEADER include/spdk/vhost.h 00:03:25.032 TEST_HEADER include/spdk/vmd.h 00:03:25.032 LINK spdk_nvme_identify 00:03:25.032 TEST_HEADER include/spdk/xor.h 00:03:25.032 TEST_HEADER include/spdk/zipf.h 00:03:25.032 CXX 
test/cpp_headers/accel.o 00:03:25.032 CC examples/ioat/verify/verify.o 00:03:25.032 LINK spdk_nvme_perf 00:03:25.032 LINK bdev_svc 00:03:25.032 LINK led 00:03:25.032 CXX test/cpp_headers/accel_module.o 00:03:25.291 LINK verify 00:03:25.291 CXX test/cpp_headers/assert.o 00:03:25.291 CC app/spdk_dd/spdk_dd.o 00:03:25.291 LINK test_dma 00:03:25.291 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:25.291 CC examples/idxd/perf/perf.o 00:03:25.291 CC app/fio/nvme/fio_plugin.o 00:03:25.291 CC examples/thread/thread/thread_ex.o 00:03:25.291 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:25.550 CXX test/cpp_headers/barrier.o 00:03:25.550 LINK interrupt_tgt 00:03:25.550 CC examples/sock/hello_world/hello_sock.o 00:03:25.550 CC test/app/histogram_perf/histogram_perf.o 00:03:25.550 CXX test/cpp_headers/base64.o 00:03:25.550 LINK spdk_top 00:03:25.550 LINK idxd_perf 00:03:25.550 LINK thread 00:03:25.809 LINK histogram_perf 00:03:25.809 LINK spdk_dd 00:03:25.809 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:25.809 CXX test/cpp_headers/bdev.o 00:03:25.809 LINK nvme_fuzz 00:03:25.809 LINK hello_sock 00:03:25.809 CC app/fio/bdev/fio_plugin.o 00:03:26.067 CC app/vhost/vhost.o 00:03:26.067 CXX test/cpp_headers/bdev_module.o 00:03:26.067 LINK spdk_nvme 00:03:26.067 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:26.067 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:26.067 CXX test/cpp_headers/bdev_zone.o 00:03:26.067 CXX test/cpp_headers/bit_array.o 00:03:26.067 CXX test/cpp_headers/bit_pool.o 00:03:26.068 CC examples/accel/perf/accel_perf.o 00:03:26.068 CXX test/cpp_headers/blob_bdev.o 00:03:26.068 LINK vhost 00:03:26.068 CXX test/cpp_headers/blobfs_bdev.o 00:03:26.068 CXX test/cpp_headers/blobfs.o 00:03:26.326 CXX test/cpp_headers/blob.o 00:03:26.326 CXX test/cpp_headers/conf.o 00:03:26.326 CXX test/cpp_headers/config.o 00:03:26.326 CXX test/cpp_headers/cpuset.o 00:03:26.326 CXX test/cpp_headers/crc16.o 00:03:26.326 LINK vhost_fuzz 00:03:26.326 CC test/env/mem_callbacks/mem_callbacks.o 00:03:26.326 LINK spdk_bdev 00:03:26.585 CXX test/cpp_headers/crc32.o 00:03:26.585 LINK accel_perf 00:03:26.585 CC test/event/event_perf/event_perf.o 00:03:26.585 CC examples/blob/hello_world/hello_blob.o 00:03:26.585 CC examples/blob/cli/blobcli.o 00:03:26.585 CC test/event/reactor/reactor.o 00:03:26.585 CC test/event/reactor_perf/reactor_perf.o 00:03:26.585 CC test/event/app_repeat/app_repeat.o 00:03:26.585 LINK event_perf 00:03:26.844 CXX test/cpp_headers/crc64.o 00:03:26.844 LINK reactor_perf 00:03:26.844 LINK reactor 00:03:26.844 CXX test/cpp_headers/dif.o 00:03:26.844 CC test/event/scheduler/scheduler.o 00:03:26.844 LINK hello_blob 00:03:26.844 LINK app_repeat 00:03:26.844 CXX test/cpp_headers/dma.o 00:03:27.103 CC test/app/jsoncat/jsoncat.o 00:03:27.103 CXX test/cpp_headers/endian.o 00:03:27.103 CXX test/cpp_headers/env_dpdk.o 00:03:27.103 LINK mem_callbacks 00:03:27.103 LINK blobcli 00:03:27.103 LINK scheduler 00:03:27.103 CC test/nvme/aer/aer.o 00:03:27.103 LINK jsoncat 00:03:27.103 CC test/nvme/reset/reset.o 00:03:27.103 CC test/app/stub/stub.o 00:03:27.103 CXX test/cpp_headers/env.o 00:03:27.362 CC test/env/vtophys/vtophys.o 00:03:27.362 CC test/rpc_client/rpc_client_test.o 00:03:27.362 CXX test/cpp_headers/event.o 00:03:27.362 LINK stub 00:03:27.362 CXX test/cpp_headers/fd_group.o 00:03:27.362 CC test/nvme/sgl/sgl.o 00:03:27.362 LINK iscsi_fuzz 00:03:27.362 LINK aer 00:03:27.362 LINK vtophys 00:03:27.362 LINK reset 00:03:27.362 CC examples/nvme/hello_world/hello_world.o 00:03:27.362 LINK rpc_client_test 
00:03:27.621 CXX test/cpp_headers/fd.o 00:03:27.621 CC examples/nvme/reconnect/reconnect.o 00:03:27.621 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:27.621 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:27.621 CC examples/nvme/arbitration/arbitration.o 00:03:27.621 LINK sgl 00:03:27.621 CC examples/nvme/hotplug/hotplug.o 00:03:27.621 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:27.621 LINK hello_world 00:03:27.880 CXX test/cpp_headers/file.o 00:03:27.880 CC test/accel/dif/dif.o 00:03:27.880 LINK env_dpdk_post_init 00:03:27.880 LINK cmb_copy 00:03:27.880 CC test/nvme/e2edp/nvme_dp.o 00:03:27.880 CXX test/cpp_headers/fsdev.o 00:03:27.880 LINK reconnect 00:03:27.880 CC examples/nvme/abort/abort.o 00:03:27.880 LINK hotplug 00:03:28.139 LINK arbitration 00:03:28.139 CC test/env/memory/memory_ut.o 00:03:28.139 CXX test/cpp_headers/fsdev_module.o 00:03:28.139 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:28.139 LINK nvme_manage 00:03:28.139 CC test/nvme/overhead/overhead.o 00:03:28.139 LINK nvme_dp 00:03:28.398 CC test/nvme/err_injection/err_injection.o 00:03:28.398 CXX test/cpp_headers/ftl.o 00:03:28.398 LINK pmr_persistence 00:03:28.398 LINK abort 00:03:28.398 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:28.398 CXX test/cpp_headers/fuse_dispatcher.o 00:03:28.398 CC test/nvme/startup/startup.o 00:03:28.398 LINK dif 00:03:28.655 LINK overhead 00:03:28.655 LINK err_injection 00:03:28.655 CXX test/cpp_headers/gpt_spec.o 00:03:28.655 CXX test/cpp_headers/hexlify.o 00:03:28.655 LINK startup 00:03:28.655 LINK hello_fsdev 00:03:28.655 CXX test/cpp_headers/histogram_data.o 00:03:28.655 CC examples/bdev/hello_world/hello_bdev.o 00:03:28.655 CC test/blobfs/mkfs/mkfs.o 00:03:28.655 CXX test/cpp_headers/idxd.o 00:03:28.913 CC test/env/pci/pci_ut.o 00:03:28.914 CC examples/bdev/bdevperf/bdevperf.o 00:03:28.914 CC test/nvme/reserve/reserve.o 00:03:28.914 CXX test/cpp_headers/idxd_spec.o 00:03:28.914 CC test/nvme/simple_copy/simple_copy.o 00:03:28.914 CC test/nvme/connect_stress/connect_stress.o 00:03:28.914 LINK hello_bdev 00:03:28.914 LINK mkfs 00:03:29.172 LINK reserve 00:03:29.172 CC test/lvol/esnap/esnap.o 00:03:29.172 CXX test/cpp_headers/init.o 00:03:29.172 LINK pci_ut 00:03:29.172 CXX test/cpp_headers/ioat.o 00:03:29.172 LINK connect_stress 00:03:29.172 CXX test/cpp_headers/ioat_spec.o 00:03:29.172 LINK simple_copy 00:03:29.431 LINK memory_ut 00:03:29.431 CXX test/cpp_headers/iscsi_spec.o 00:03:29.431 CXX test/cpp_headers/json.o 00:03:29.431 CC test/nvme/boot_partition/boot_partition.o 00:03:29.431 CXX test/cpp_headers/jsonrpc.o 00:03:29.431 CC test/bdev/bdevio/bdevio.o 00:03:29.431 CC test/nvme/compliance/nvme_compliance.o 00:03:29.431 CC test/nvme/fused_ordering/fused_ordering.o 00:03:29.431 CXX test/cpp_headers/keyring.o 00:03:29.690 CXX test/cpp_headers/keyring_module.o 00:03:29.690 LINK boot_partition 00:03:29.690 LINK bdevperf 00:03:29.690 CXX test/cpp_headers/likely.o 00:03:29.690 CC test/nvme/fdp/fdp.o 00:03:29.690 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:29.690 LINK fused_ordering 00:03:29.690 CXX test/cpp_headers/log.o 00:03:29.690 LINK nvme_compliance 00:03:29.948 CXX test/cpp_headers/lvol.o 00:03:29.948 LINK bdevio 00:03:29.948 CXX test/cpp_headers/md5.o 00:03:29.948 CC test/nvme/cuse/cuse.o 00:03:29.948 LINK doorbell_aers 00:03:29.948 CXX test/cpp_headers/memory.o 00:03:29.948 CXX test/cpp_headers/mmio.o 00:03:29.948 LINK fdp 00:03:29.948 CXX test/cpp_headers/nbd.o 00:03:29.948 CC examples/nvmf/nvmf/nvmf.o 00:03:29.948 CXX test/cpp_headers/net.o 
00:03:29.948 CXX test/cpp_headers/notify.o 00:03:29.948 CXX test/cpp_headers/nvme.o 00:03:30.206 CXX test/cpp_headers/nvme_intel.o 00:03:30.206 CXX test/cpp_headers/nvme_ocssd.o 00:03:30.206 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:30.206 CXX test/cpp_headers/nvme_spec.o 00:03:30.206 CXX test/cpp_headers/nvme_zns.o 00:03:30.206 CXX test/cpp_headers/nvmf_cmd.o 00:03:30.206 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:30.206 CXX test/cpp_headers/nvmf.o 00:03:30.206 CXX test/cpp_headers/nvmf_spec.o 00:03:30.206 CXX test/cpp_headers/nvmf_transport.o 00:03:30.465 CXX test/cpp_headers/opal.o 00:03:30.465 LINK nvmf 00:03:30.465 CXX test/cpp_headers/opal_spec.o 00:03:30.465 CXX test/cpp_headers/pci_ids.o 00:03:30.465 CXX test/cpp_headers/pipe.o 00:03:30.465 CXX test/cpp_headers/queue.o 00:03:30.465 CXX test/cpp_headers/reduce.o 00:03:30.465 CXX test/cpp_headers/rpc.o 00:03:30.465 CXX test/cpp_headers/scheduler.o 00:03:30.465 CXX test/cpp_headers/scsi.o 00:03:30.465 CXX test/cpp_headers/scsi_spec.o 00:03:30.465 CXX test/cpp_headers/sock.o 00:03:30.723 CXX test/cpp_headers/stdinc.o 00:03:30.723 CXX test/cpp_headers/string.o 00:03:30.723 CXX test/cpp_headers/thread.o 00:03:30.723 CXX test/cpp_headers/trace.o 00:03:30.723 CXX test/cpp_headers/trace_parser.o 00:03:30.723 CXX test/cpp_headers/tree.o 00:03:30.723 CXX test/cpp_headers/ublk.o 00:03:30.723 CXX test/cpp_headers/util.o 00:03:30.723 CXX test/cpp_headers/uuid.o 00:03:30.723 CXX test/cpp_headers/version.o 00:03:30.723 CXX test/cpp_headers/vfio_user_pci.o 00:03:30.723 CXX test/cpp_headers/vfio_user_spec.o 00:03:30.723 CXX test/cpp_headers/vhost.o 00:03:30.982 CXX test/cpp_headers/vmd.o 00:03:30.982 CXX test/cpp_headers/xor.o 00:03:30.982 CXX test/cpp_headers/zipf.o 00:03:31.241 LINK cuse 00:03:33.771 LINK esnap 00:03:34.030 ************************************ 00:03:34.030 END TEST make 00:03:34.030 ************************************ 00:03:34.030 00:03:34.030 real 1m24.633s 00:03:34.030 user 8m2.592s 00:03:34.030 sys 1m30.932s 00:03:34.030 12:11:20 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:34.030 12:11:20 make -- common/autotest_common.sh@10 -- $ set +x 00:03:34.030 12:11:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:34.030 12:11:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:34.030 12:11:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:34.030 12:11:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.030 12:11:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:34.030 12:11:20 -- pm/common@44 -- $ pid=5295 00:03:34.030 12:11:20 -- pm/common@50 -- $ kill -TERM 5295 00:03:34.030 12:11:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.030 12:11:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:34.030 12:11:20 -- pm/common@44 -- $ pid=5297 00:03:34.030 12:11:20 -- pm/common@50 -- $ kill -TERM 5297 00:03:34.030 12:11:20 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:34.030 12:11:20 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:34.289 12:11:20 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:34.289 12:11:20 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:34.289 12:11:20 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:34.289 12:11:20 -- common/autotest_common.sh@1711 -- # lt 
1.15 2 00:03:34.289 12:11:20 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.289 12:11:20 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.289 12:11:20 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.289 12:11:20 -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.289 12:11:20 -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.289 12:11:20 -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.289 12:11:20 -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.289 12:11:20 -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.289 12:11:20 -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.289 12:11:20 -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.289 12:11:20 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.289 12:11:20 -- scripts/common.sh@344 -- # case "$op" in 00:03:34.289 12:11:20 -- scripts/common.sh@345 -- # : 1 00:03:34.289 12:11:20 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.289 12:11:20 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:34.289 12:11:20 -- scripts/common.sh@365 -- # decimal 1 00:03:34.289 12:11:20 -- scripts/common.sh@353 -- # local d=1 00:03:34.289 12:11:20 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.289 12:11:20 -- scripts/common.sh@355 -- # echo 1 00:03:34.289 12:11:20 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.289 12:11:20 -- scripts/common.sh@366 -- # decimal 2 00:03:34.289 12:11:20 -- scripts/common.sh@353 -- # local d=2 00:03:34.289 12:11:20 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.289 12:11:20 -- scripts/common.sh@355 -- # echo 2 00:03:34.289 12:11:20 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.289 12:11:20 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.289 12:11:20 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.289 12:11:20 -- scripts/common.sh@368 -- # return 0 00:03:34.289 12:11:20 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.289 12:11:20 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:34.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.289 --rc genhtml_branch_coverage=1 00:03:34.289 --rc genhtml_function_coverage=1 00:03:34.289 --rc genhtml_legend=1 00:03:34.289 --rc geninfo_all_blocks=1 00:03:34.289 --rc geninfo_unexecuted_blocks=1 00:03:34.289 00:03:34.289 ' 00:03:34.289 12:11:20 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:34.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.289 --rc genhtml_branch_coverage=1 00:03:34.289 --rc genhtml_function_coverage=1 00:03:34.289 --rc genhtml_legend=1 00:03:34.289 --rc geninfo_all_blocks=1 00:03:34.289 --rc geninfo_unexecuted_blocks=1 00:03:34.289 00:03:34.289 ' 00:03:34.289 12:11:20 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:34.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.289 --rc genhtml_branch_coverage=1 00:03:34.289 --rc genhtml_function_coverage=1 00:03:34.289 --rc genhtml_legend=1 00:03:34.289 --rc geninfo_all_blocks=1 00:03:34.289 --rc geninfo_unexecuted_blocks=1 00:03:34.289 00:03:34.289 ' 00:03:34.289 12:11:20 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:34.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.289 --rc genhtml_branch_coverage=1 00:03:34.289 --rc genhtml_function_coverage=1 00:03:34.289 --rc genhtml_legend=1 00:03:34.289 --rc geninfo_all_blocks=1 00:03:34.289 --rc geninfo_unexecuted_blocks=1 00:03:34.289 00:03:34.289 ' 00:03:34.289 
12:11:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:34.289 12:11:20 -- nvmf/common.sh@7 -- # uname -s 00:03:34.289 12:11:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:34.289 12:11:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:34.289 12:11:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:34.289 12:11:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:34.289 12:11:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:34.289 12:11:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:34.289 12:11:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:34.289 12:11:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:34.289 12:11:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:34.289 12:11:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:34.289 12:11:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:03:34.289 12:11:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:03:34.289 12:11:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:34.290 12:11:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:34.290 12:11:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:34.290 12:11:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:34.290 12:11:20 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:34.290 12:11:20 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:34.290 12:11:20 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:34.290 12:11:20 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:34.290 12:11:20 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:34.290 12:11:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.290 12:11:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.290 12:11:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.290 12:11:20 -- paths/export.sh@5 -- # export PATH 00:03:34.290 12:11:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.290 12:11:20 -- nvmf/common.sh@51 -- # : 0 00:03:34.290 12:11:20 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:34.290 12:11:20 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:34.290 12:11:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:34.290 12:11:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:34.290 12:11:20 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:34.290 12:11:20 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:34.290 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:34.290 12:11:20 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:34.290 12:11:20 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:34.290 12:11:20 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:34.290 12:11:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:34.290 12:11:20 -- spdk/autotest.sh@32 -- # uname -s 00:03:34.290 12:11:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:34.290 12:11:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:34.290 12:11:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.290 12:11:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:34.290 12:11:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.290 12:11:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:34.290 12:11:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:34.290 12:11:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:34.290 12:11:20 -- spdk/autotest.sh@48 -- # udevadm_pid=54363 00:03:34.290 12:11:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:34.290 12:11:20 -- pm/common@17 -- # local monitor 00:03:34.290 12:11:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.290 12:11:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.290 12:11:20 -- pm/common@25 -- # sleep 1 00:03:34.290 12:11:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:34.290 12:11:20 -- pm/common@21 -- # date +%s 00:03:34.290 12:11:20 -- pm/common@21 -- # date +%s 00:03:34.290 12:11:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733487080 00:03:34.290 12:11:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733487080 00:03:34.549 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733487080_collect-cpu-load.pm.log 00:03:34.549 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733487080_collect-vmstat.pm.log 00:03:35.486 12:11:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:35.486 12:11:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:35.486 12:11:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:35.486 12:11:21 -- common/autotest_common.sh@10 -- # set +x 00:03:35.486 12:11:21 -- spdk/autotest.sh@59 -- # create_test_list 00:03:35.486 12:11:21 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:35.486 12:11:21 -- common/autotest_common.sh@10 -- # set +x 00:03:35.486 12:11:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:35.486 12:11:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:35.486 12:11:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:35.486 12:11:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:35.486 12:11:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:35.486 12:11:21 -- spdk/autotest.sh@65 -- # 
freebsd_update_contigmem_mod 00:03:35.486 12:11:21 -- common/autotest_common.sh@1457 -- # uname 00:03:35.486 12:11:21 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:35.486 12:11:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:35.486 12:11:21 -- common/autotest_common.sh@1477 -- # uname 00:03:35.486 12:11:22 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:35.486 12:11:22 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:35.486 12:11:22 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:35.486 lcov: LCOV version 1.15 00:03:35.486 12:11:22 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:50.417 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:50.417 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:02.623 12:11:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:02.623 12:11:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.623 12:11:49 -- common/autotest_common.sh@10 -- # set +x 00:04:02.623 12:11:49 -- spdk/autotest.sh@78 -- # rm -f 00:04:02.623 12:11:49 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.191 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.450 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:03.450 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:03.450 12:11:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:03.450 12:11:49 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:03.450 12:11:49 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:03.450 12:11:49 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:03.450 12:11:49 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:03.450 12:11:49 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:03.450 12:11:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:03.450 12:11:49 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:03.450 12:11:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.450 12:11:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:03.450 12:11:49 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:03.450 12:11:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.450 12:11:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.450 12:11:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:03.450 12:11:49 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:03.450 12:11:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.450 12:11:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:03.450 12:11:49 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:03.450 12:11:49 -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:04:03.450 12:11:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.450 12:11:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.450 12:11:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:03.450 12:11:49 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:03.450 12:11:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:03.450 12:11:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.450 12:11:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.450 12:11:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:03.450 12:11:49 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:03.450 12:11:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:03.450 12:11:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.450 12:11:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:03.450 12:11:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.450 12:11:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.450 12:11:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:03.450 12:11:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:03.450 12:11:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:03.450 No valid GPT data, bailing 00:04:03.450 12:11:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.450 12:11:50 -- scripts/common.sh@394 -- # pt= 00:04:03.450 12:11:50 -- scripts/common.sh@395 -- # return 1 00:04:03.450 12:11:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:03.450 1+0 records in 00:04:03.450 1+0 records out 00:04:03.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00349899 s, 300 MB/s 00:04:03.450 12:11:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.450 12:11:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.450 12:11:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:03.450 12:11:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:03.450 12:11:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:03.450 No valid GPT data, bailing 00:04:03.450 12:11:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:03.450 12:11:50 -- scripts/common.sh@394 -- # pt= 00:04:03.450 12:11:50 -- scripts/common.sh@395 -- # return 1 00:04:03.450 12:11:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:03.450 1+0 records in 00:04:03.450 1+0 records out 00:04:03.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448369 s, 234 MB/s 00:04:03.450 12:11:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.450 12:11:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.450 12:11:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:03.450 12:11:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:03.450 12:11:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:03.711 No valid GPT data, bailing 00:04:03.711 12:11:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:03.711 12:11:50 -- scripts/common.sh@394 -- # pt= 00:04:03.711 12:11:50 -- scripts/common.sh@395 -- # return 1 00:04:03.711 12:11:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero 
of=/dev/nvme1n2 bs=1M count=1 00:04:03.711 1+0 records in 00:04:03.711 1+0 records out 00:04:03.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00384692 s, 273 MB/s 00:04:03.711 12:11:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.711 12:11:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.711 12:11:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:03.711 12:11:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:03.711 12:11:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:03.711 No valid GPT data, bailing 00:04:03.711 12:11:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:03.711 12:11:50 -- scripts/common.sh@394 -- # pt= 00:04:03.711 12:11:50 -- scripts/common.sh@395 -- # return 1 00:04:03.711 12:11:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:03.711 1+0 records in 00:04:03.711 1+0 records out 00:04:03.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480425 s, 218 MB/s 00:04:03.711 12:11:50 -- spdk/autotest.sh@105 -- # sync 00:04:04.279 12:11:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:04.279 12:11:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:04.279 12:11:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:06.183 12:11:52 -- spdk/autotest.sh@111 -- # uname -s 00:04:06.183 12:11:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:06.183 12:11:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:06.183 12:11:52 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:06.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.751 Hugepages 00:04:06.751 node hugesize free / total 00:04:06.751 node0 1048576kB 0 / 0 00:04:06.751 node0 2048kB 0 / 0 00:04:06.751 00:04:06.751 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:07.009 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:07.009 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:07.009 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:07.009 12:11:53 -- spdk/autotest.sh@117 -- # uname -s 00:04:07.009 12:11:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:07.009 12:11:53 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:07.009 12:11:53 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.945 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.945 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.945 12:11:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:08.880 12:11:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:08.880 12:11:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:08.880 12:11:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:08.880 12:11:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:08.880 12:11:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:08.880 12:11:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:08.880 12:11:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:09.139 12:11:55 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:09.139 12:11:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:09.139 12:11:55 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:09.139 12:11:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:09.139 12:11:55 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.397 Waiting for block devices as requested 00:04:09.397 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:09.656 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:09.656 12:11:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:09.656 12:11:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:09.656 12:11:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:09.656 12:11:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:09.656 12:11:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:09.656 12:11:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:09.656 12:11:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:09.656 12:11:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:09.656 12:11:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:09.656 12:11:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:09.656 12:11:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:09.656 12:11:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:09.656 12:11:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:09.656 12:11:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:09.656 12:11:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:09.656 12:11:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:09.656 12:11:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:09.656 12:11:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:09.656 12:11:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:09.656 12:11:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:09.656 12:11:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:09.656 12:11:56 -- common/autotest_common.sh@1543 -- # continue 00:04:09.656 12:11:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:09.656 12:11:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:09.657 12:11:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:09.657 12:11:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:09.657 12:11:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:09.657 12:11:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:09.657 12:11:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:09.657 12:11:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:09.657 12:11:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 
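[A minimal sketch, assuming the paths and bit masks visible in the xtrace around this point, of the pre-cleanup check being traced here: enumerate NVMe controller BDFs from gen_nvme.sh, resolve each controller node, then read OACS and unvmcap to decide whether a namespace revert is needed. This is not the SPDK helper itself, only an approximation of the pattern shown above and below.]
  # Enumerate controller BDFs from the gen_nvme.sh JSON, then inspect each controller.
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
    # Resolve the /dev/nvmeX controller that sits on this PCI address.
    ctrlr=/dev/$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)          # e.g. ' 0x12a' in the trace above
    (( oacs & 0x8 )) || continue                                     # OACS bit 3: namespace management supported
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    [[ $unvmcap -eq 0 ]] && continue                                 # no unallocated NVM capacity, nothing to revert
  done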
00:04:09.657 12:11:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:09.657 12:11:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:09.657 12:11:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:09.657 12:11:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:09.657 12:11:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:09.657 12:11:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:09.657 12:11:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:09.657 12:11:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:09.657 12:11:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:09.657 12:11:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:09.657 12:11:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:09.657 12:11:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:09.657 12:11:56 -- common/autotest_common.sh@1543 -- # continue 00:04:09.657 12:11:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:09.657 12:11:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.657 12:11:56 -- common/autotest_common.sh@10 -- # set +x 00:04:09.657 12:11:56 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:09.657 12:11:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.657 12:11:56 -- common/autotest_common.sh@10 -- # set +x 00:04:09.657 12:11:56 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.593 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.593 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.593 12:11:57 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:10.593 12:11:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.593 12:11:57 -- common/autotest_common.sh@10 -- # set +x 00:04:10.593 12:11:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:10.593 12:11:57 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:10.593 12:11:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:10.593 12:11:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:10.593 12:11:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:10.593 12:11:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:10.593 12:11:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:10.593 12:11:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:10.593 12:11:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:10.593 12:11:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:10.593 12:11:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.593 12:11:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.593 12:11:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:10.852 12:11:57 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:10.852 12:11:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:10.852 12:11:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:10.852 12:11:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:10.852 12:11:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:10.852 12:11:57 -- 
common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:10.852 12:11:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:10.852 12:11:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:10.852 12:11:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:10.853 12:11:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:10.853 12:11:57 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:10.853 12:11:57 -- common/autotest_common.sh@1572 -- # return 0 00:04:10.853 12:11:57 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:10.853 12:11:57 -- common/autotest_common.sh@1580 -- # return 0 00:04:10.853 12:11:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:10.853 12:11:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:10.853 12:11:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:10.853 12:11:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:10.853 12:11:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:10.853 12:11:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.853 12:11:57 -- common/autotest_common.sh@10 -- # set +x 00:04:10.853 12:11:57 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:10.853 12:11:57 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:10.853 12:11:57 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:10.853 12:11:57 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:10.853 12:11:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.853 12:11:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.853 12:11:57 -- common/autotest_common.sh@10 -- # set +x 00:04:10.853 ************************************ 00:04:10.853 START TEST env 00:04:10.853 ************************************ 00:04:10.853 12:11:57 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:10.853 * Looking for test storage... 00:04:10.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:10.853 12:11:57 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:10.853 12:11:57 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:10.853 12:11:57 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:11.112 12:11:57 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:11.112 12:11:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.112 12:11:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.112 12:11:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.112 12:11:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.112 12:11:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.112 12:11:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.112 12:11:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.112 12:11:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.112 12:11:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.112 12:11:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.112 12:11:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.112 12:11:57 env -- scripts/common.sh@344 -- # case "$op" in 00:04:11.112 12:11:57 env -- scripts/common.sh@345 -- # : 1 00:04:11.112 12:11:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.112 12:11:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.112 12:11:57 env -- scripts/common.sh@365 -- # decimal 1 00:04:11.112 12:11:57 env -- scripts/common.sh@353 -- # local d=1 00:04:11.112 12:11:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.112 12:11:57 env -- scripts/common.sh@355 -- # echo 1 00:04:11.112 12:11:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.112 12:11:57 env -- scripts/common.sh@366 -- # decimal 2 00:04:11.112 12:11:57 env -- scripts/common.sh@353 -- # local d=2 00:04:11.112 12:11:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.112 12:11:57 env -- scripts/common.sh@355 -- # echo 2 00:04:11.112 12:11:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.112 12:11:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.112 12:11:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.112 12:11:57 env -- scripts/common.sh@368 -- # return 0 00:04:11.112 12:11:57 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.112 12:11:57 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:11.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.112 --rc genhtml_branch_coverage=1 00:04:11.112 --rc genhtml_function_coverage=1 00:04:11.112 --rc genhtml_legend=1 00:04:11.112 --rc geninfo_all_blocks=1 00:04:11.112 --rc geninfo_unexecuted_blocks=1 00:04:11.112 00:04:11.112 ' 00:04:11.112 12:11:57 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:11.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.112 --rc genhtml_branch_coverage=1 00:04:11.112 --rc genhtml_function_coverage=1 00:04:11.112 --rc genhtml_legend=1 00:04:11.112 --rc geninfo_all_blocks=1 00:04:11.112 --rc geninfo_unexecuted_blocks=1 00:04:11.112 00:04:11.112 ' 00:04:11.112 12:11:57 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:11.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.112 --rc genhtml_branch_coverage=1 00:04:11.112 --rc genhtml_function_coverage=1 00:04:11.112 --rc genhtml_legend=1 00:04:11.112 --rc geninfo_all_blocks=1 00:04:11.112 --rc geninfo_unexecuted_blocks=1 00:04:11.112 00:04:11.112 ' 00:04:11.112 12:11:57 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:11.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.112 --rc genhtml_branch_coverage=1 00:04:11.112 --rc genhtml_function_coverage=1 00:04:11.112 --rc genhtml_legend=1 00:04:11.112 --rc geninfo_all_blocks=1 00:04:11.112 --rc geninfo_unexecuted_blocks=1 00:04:11.112 00:04:11.112 ' 00:04:11.112 12:11:57 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:11.112 12:11:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.112 12:11:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.112 12:11:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.112 ************************************ 00:04:11.112 START TEST env_memory 00:04:11.112 ************************************ 00:04:11.112 12:11:57 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:11.112 00:04:11.112 00:04:11.112 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.112 http://cunit.sourceforge.net/ 00:04:11.112 00:04:11.112 00:04:11.112 Suite: memory 00:04:11.112 Test: alloc and free memory map ...[2024-12-06 12:11:57.652713] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:11.112 passed 00:04:11.112 Test: mem map translation ...[2024-12-06 12:11:57.684130] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:11.112 [2024-12-06 12:11:57.684387] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:11.112 [2024-12-06 12:11:57.684599] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:11.112 [2024-12-06 12:11:57.684757] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:11.112 passed 00:04:11.112 Test: mem map registration ...[2024-12-06 12:11:57.748785] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:11.113 [2024-12-06 12:11:57.748971] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:11.113 passed 00:04:11.371 Test: mem map adjacent registrations ...passed 00:04:11.371 00:04:11.371 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.371 suites 1 1 n/a 0 0 00:04:11.371 tests 4 4 4 0 0 00:04:11.371 asserts 152 152 152 0 n/a 00:04:11.371 00:04:11.371 Elapsed time = 0.214 seconds 00:04:11.371 00:04:11.371 real 0m0.233s 00:04:11.371 user 0m0.215s 00:04:11.371 sys 0m0.010s 00:04:11.371 12:11:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.371 12:11:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:11.371 ************************************ 00:04:11.371 END TEST env_memory 00:04:11.371 ************************************ 00:04:11.371 12:11:57 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:11.371 12:11:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.371 12:11:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.371 12:11:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.371 ************************************ 00:04:11.371 START TEST env_vtophys 00:04:11.371 ************************************ 00:04:11.371 12:11:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:11.371 EAL: lib.eal log level changed from notice to debug 00:04:11.371 EAL: Detected lcore 0 as core 0 on socket 0 00:04:11.371 EAL: Detected lcore 1 as core 0 on socket 0 00:04:11.371 EAL: Detected lcore 2 as core 0 on socket 0 00:04:11.371 EAL: Detected lcore 3 as core 0 on socket 0 00:04:11.371 EAL: Detected lcore 4 as core 0 on socket 0 00:04:11.372 EAL: Detected lcore 5 as core 0 on socket 0 00:04:11.372 EAL: Detected lcore 6 as core 0 on socket 0 00:04:11.372 EAL: Detected lcore 7 as core 0 on socket 0 00:04:11.372 EAL: Detected lcore 8 as core 0 on socket 0 00:04:11.372 EAL: Detected lcore 9 as core 0 on socket 0 00:04:11.372 EAL: Maximum logical cores by configuration: 128 00:04:11.372 EAL: Detected CPU lcores: 10 00:04:11.372 EAL: Detected NUMA nodes: 1 00:04:11.372 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:11.372 EAL: Detected shared linkage of DPDK 00:04:11.372 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:11.372 EAL: Selected IOVA mode 'PA' 00:04:11.372 EAL: Probing VFIO support... 00:04:11.372 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:11.372 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:11.372 EAL: Ask a virtual area of 0x2e000 bytes 00:04:11.372 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:11.372 EAL: Setting up physically contiguous memory... 00:04:11.372 EAL: Setting maximum number of open files to 524288 00:04:11.372 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:11.372 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:11.372 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.372 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:11.372 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.372 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.372 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:11.372 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:11.372 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.372 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:11.372 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.372 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.372 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:11.372 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:11.372 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.372 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:11.372 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.372 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.372 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:11.372 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:11.372 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.372 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:11.372 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.372 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.372 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:11.372 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:11.372 EAL: Hugepages will be freed exactly as allocated. 00:04:11.372 EAL: No shared files mode enabled, IPC is disabled 00:04:11.372 EAL: No shared files mode enabled, IPC is disabled 00:04:11.631 EAL: TSC frequency is ~2200000 KHz 00:04:11.631 EAL: Main lcore 0 is ready (tid=7f7c96d3aa00;cpuset=[0]) 00:04:11.631 EAL: Trying to obtain current memory policy. 00:04:11.631 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.631 EAL: Restoring previous memory policy: 0 00:04:11.631 EAL: request: mp_malloc_sync 00:04:11.631 EAL: No shared files mode enabled, IPC is disabled 00:04:11.631 EAL: Heap on socket 0 was expanded by 2MB 00:04:11.631 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:11.632 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:11.632 EAL: Mem event callback 'spdk:(nil)' registered 00:04:11.632 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:11.632 00:04:11.632 00:04:11.632 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.632 http://cunit.sourceforge.net/ 00:04:11.632 00:04:11.632 00:04:11.632 Suite: components_suite 00:04:11.632 Test: vtophys_malloc_test ...passed 00:04:11.632 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:11.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.632 EAL: Restoring previous memory policy: 4 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was expanded by 4MB 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was shrunk by 4MB 00:04:11.632 EAL: Trying to obtain current memory policy. 00:04:11.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.632 EAL: Restoring previous memory policy: 4 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was expanded by 6MB 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was shrunk by 6MB 00:04:11.632 EAL: Trying to obtain current memory policy. 00:04:11.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.632 EAL: Restoring previous memory policy: 4 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was expanded by 10MB 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was shrunk by 10MB 00:04:11.632 EAL: Trying to obtain current memory policy. 00:04:11.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.632 EAL: Restoring previous memory policy: 4 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was expanded by 18MB 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was shrunk by 18MB 00:04:11.632 EAL: Trying to obtain current memory policy. 00:04:11.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.632 EAL: Restoring previous memory policy: 4 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was expanded by 34MB 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was shrunk by 34MB 00:04:11.632 EAL: Trying to obtain current memory policy. 
00:04:11.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.632 EAL: Restoring previous memory policy: 4 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was expanded by 66MB 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was shrunk by 66MB 00:04:11.632 EAL: Trying to obtain current memory policy. 00:04:11.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.632 EAL: Restoring previous memory policy: 4 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was expanded by 130MB 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was shrunk by 130MB 00:04:11.632 EAL: Trying to obtain current memory policy. 00:04:11.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.632 EAL: Restoring previous memory policy: 4 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was expanded by 258MB 00:04:11.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.632 EAL: request: mp_malloc_sync 00:04:11.632 EAL: No shared files mode enabled, IPC is disabled 00:04:11.632 EAL: Heap on socket 0 was shrunk by 258MB 00:04:11.632 EAL: Trying to obtain current memory policy. 00:04:11.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.890 EAL: Restoring previous memory policy: 4 00:04:11.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.890 EAL: request: mp_malloc_sync 00:04:11.890 EAL: No shared files mode enabled, IPC is disabled 00:04:11.890 EAL: Heap on socket 0 was expanded by 514MB 00:04:11.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.890 EAL: request: mp_malloc_sync 00:04:11.890 EAL: No shared files mode enabled, IPC is disabled 00:04:11.891 EAL: Heap on socket 0 was shrunk by 514MB 00:04:11.891 EAL: Trying to obtain current memory policy. 
00:04:11.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.148 EAL: Restoring previous memory policy: 4 00:04:12.149 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.149 EAL: request: mp_malloc_sync 00:04:12.149 EAL: No shared files mode enabled, IPC is disabled 00:04:12.149 EAL: Heap on socket 0 was expanded by 1026MB 00:04:12.149 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.149 passed 00:04:12.149 00:04:12.149 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.149 suites 1 1 n/a 0 0 00:04:12.149 tests 2 2 2 0 0 00:04:12.149 asserts 5540 5540 5540 0 n/a 00:04:12.149 00:04:12.149 Elapsed time = 0.697 seconds 00:04:12.149 EAL: request: mp_malloc_sync 00:04:12.149 EAL: No shared files mode enabled, IPC is disabled 00:04:12.149 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:12.149 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.149 EAL: request: mp_malloc_sync 00:04:12.149 EAL: No shared files mode enabled, IPC is disabled 00:04:12.149 EAL: Heap on socket 0 was shrunk by 2MB 00:04:12.149 EAL: No shared files mode enabled, IPC is disabled 00:04:12.149 EAL: No shared files mode enabled, IPC is disabled 00:04:12.149 EAL: No shared files mode enabled, IPC is disabled 00:04:12.407 00:04:12.407 real 0m0.907s 00:04:12.407 user 0m0.463s 00:04:12.407 sys 0m0.310s 00:04:12.407 12:11:58 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.407 12:11:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:12.407 ************************************ 00:04:12.407 END TEST env_vtophys 00:04:12.407 ************************************ 00:04:12.407 12:11:58 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:12.407 12:11:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.407 12:11:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.407 12:11:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.407 ************************************ 00:04:12.407 START TEST env_pci 00:04:12.407 ************************************ 00:04:12.407 12:11:58 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:12.407 00:04:12.407 00:04:12.407 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.407 http://cunit.sourceforge.net/ 00:04:12.407 00:04:12.407 00:04:12.407 Suite: pci 00:04:12.407 Test: pci_hook ...[2024-12-06 12:11:58.866875] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56538 has claimed it 00:04:12.407 passed 00:04:12.407 00:04:12.407 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.407 suites 1 1 n/a 0 0 00:04:12.407 tests 1 1 1 0 0 00:04:12.407 asserts 25 25 25 0 n/a 00:04:12.407 00:04:12.407 Elapsed time = 0.002 seconds 00:04:12.407 EAL: Cannot find device (10000:00:01.0) 00:04:12.407 EAL: Failed to attach device on primary process 00:04:12.407 ************************************ 00:04:12.407 END TEST env_pci 00:04:12.407 ************************************ 00:04:12.407 00:04:12.407 real 0m0.018s 00:04:12.407 user 0m0.006s 00:04:12.407 sys 0m0.012s 00:04:12.407 12:11:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.407 12:11:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:12.407 12:11:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:12.407 12:11:58 env -- env/env.sh@15 -- # uname 00:04:12.407 12:11:58 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:12.407 12:11:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:12.407 12:11:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.407 12:11:58 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:12.407 12:11:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.407 12:11:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.407 ************************************ 00:04:12.407 START TEST env_dpdk_post_init 00:04:12.407 ************************************ 00:04:12.407 12:11:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.407 EAL: Detected CPU lcores: 10 00:04:12.407 EAL: Detected NUMA nodes: 1 00:04:12.407 EAL: Detected shared linkage of DPDK 00:04:12.407 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.407 EAL: Selected IOVA mode 'PA' 00:04:12.665 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.665 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:12.665 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:12.665 Starting DPDK initialization... 00:04:12.665 Starting SPDK post initialization... 00:04:12.665 SPDK NVMe probe 00:04:12.665 Attaching to 0000:00:10.0 00:04:12.665 Attaching to 0000:00:11.0 00:04:12.665 Attached to 0000:00:10.0 00:04:12.665 Attached to 0000:00:11.0 00:04:12.665 Cleaning up... 00:04:12.665 00:04:12.665 real 0m0.186s 00:04:12.665 user 0m0.055s 00:04:12.665 sys 0m0.031s 00:04:12.665 12:11:59 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.665 12:11:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.665 ************************************ 00:04:12.665 END TEST env_dpdk_post_init 00:04:12.665 ************************************ 00:04:12.665 12:11:59 env -- env/env.sh@26 -- # uname 00:04:12.665 12:11:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:12.665 12:11:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.665 12:11:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.665 12:11:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.666 12:11:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.666 ************************************ 00:04:12.666 START TEST env_mem_callbacks 00:04:12.666 ************************************ 00:04:12.666 12:11:59 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.666 EAL: Detected CPU lcores: 10 00:04:12.666 EAL: Detected NUMA nodes: 1 00:04:12.666 EAL: Detected shared linkage of DPDK 00:04:12.666 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.666 EAL: Selected IOVA mode 'PA' 00:04:12.666 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.666 00:04:12.666 00:04:12.666 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.666 http://cunit.sourceforge.net/ 00:04:12.666 00:04:12.666 00:04:12.666 Suite: memory 00:04:12.666 Test: test ... 
00:04:12.666 register 0x200000200000 2097152 00:04:12.666 malloc 3145728 00:04:12.666 register 0x200000400000 4194304 00:04:12.666 buf 0x200000500000 len 3145728 PASSED 00:04:12.666 malloc 64 00:04:12.666 buf 0x2000004fff40 len 64 PASSED 00:04:12.666 malloc 4194304 00:04:12.666 register 0x200000800000 6291456 00:04:12.666 buf 0x200000a00000 len 4194304 PASSED 00:04:12.666 free 0x200000500000 3145728 00:04:12.666 free 0x2000004fff40 64 00:04:12.666 unregister 0x200000400000 4194304 PASSED 00:04:12.666 free 0x200000a00000 4194304 00:04:12.666 unregister 0x200000800000 6291456 PASSED 00:04:12.666 malloc 8388608 00:04:12.666 register 0x200000400000 10485760 00:04:12.666 buf 0x200000600000 len 8388608 PASSED 00:04:12.666 free 0x200000600000 8388608 00:04:12.666 unregister 0x200000400000 10485760 PASSED 00:04:12.666 passed 00:04:12.666 00:04:12.666 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.666 suites 1 1 n/a 0 0 00:04:12.666 tests 1 1 1 0 0 00:04:12.666 asserts 15 15 15 0 n/a 00:04:12.666 00:04:12.666 Elapsed time = 0.008 seconds 00:04:12.666 ************************************ 00:04:12.666 END TEST env_mem_callbacks 00:04:12.666 ************************************ 00:04:12.666 00:04:12.666 real 0m0.145s 00:04:12.666 user 0m0.024s 00:04:12.666 sys 0m0.020s 00:04:12.666 12:11:59 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.666 12:11:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:12.924 00:04:12.924 real 0m2.016s 00:04:12.924 user 0m1.018s 00:04:12.924 sys 0m0.625s 00:04:12.924 12:11:59 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.924 12:11:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.924 ************************************ 00:04:12.924 END TEST env 00:04:12.924 ************************************ 00:04:12.924 12:11:59 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:12.924 12:11:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.924 12:11:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.924 12:11:59 -- common/autotest_common.sh@10 -- # set +x 00:04:12.924 ************************************ 00:04:12.924 START TEST rpc 00:04:12.924 ************************************ 00:04:12.924 12:11:59 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:12.924 * Looking for test storage... 
00:04:12.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:12.925 12:11:59 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.925 12:11:59 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.925 12:11:59 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.925 12:11:59 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.925 12:11:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.925 12:11:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.925 12:11:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.925 12:11:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.925 12:11:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.925 12:11:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.925 12:11:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.925 12:11:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.925 12:11:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.925 12:11:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.925 12:11:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.925 12:11:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:12.925 12:11:59 rpc -- scripts/common.sh@345 -- # : 1 00:04:12.925 12:11:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.925 12:11:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.925 12:11:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:13.184 12:11:59 rpc -- scripts/common.sh@353 -- # local d=1 00:04:13.184 12:11:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.184 12:11:59 rpc -- scripts/common.sh@355 -- # echo 1 00:04:13.184 12:11:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.184 12:11:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:13.184 12:11:59 rpc -- scripts/common.sh@353 -- # local d=2 00:04:13.184 12:11:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.184 12:11:59 rpc -- scripts/common.sh@355 -- # echo 2 00:04:13.184 12:11:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.184 12:11:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.184 12:11:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.184 12:11:59 rpc -- scripts/common.sh@368 -- # return 0 00:04:13.184 12:11:59 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.184 12:11:59 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:13.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.184 --rc genhtml_branch_coverage=1 00:04:13.184 --rc genhtml_function_coverage=1 00:04:13.184 --rc genhtml_legend=1 00:04:13.184 --rc geninfo_all_blocks=1 00:04:13.184 --rc geninfo_unexecuted_blocks=1 00:04:13.184 00:04:13.184 ' 00:04:13.184 12:11:59 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:13.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.184 --rc genhtml_branch_coverage=1 00:04:13.184 --rc genhtml_function_coverage=1 00:04:13.184 --rc genhtml_legend=1 00:04:13.184 --rc geninfo_all_blocks=1 00:04:13.184 --rc geninfo_unexecuted_blocks=1 00:04:13.184 00:04:13.184 ' 00:04:13.184 12:11:59 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:13.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.184 --rc genhtml_branch_coverage=1 00:04:13.184 --rc genhtml_function_coverage=1 00:04:13.184 --rc 
genhtml_legend=1 00:04:13.184 --rc geninfo_all_blocks=1 00:04:13.184 --rc geninfo_unexecuted_blocks=1 00:04:13.184 00:04:13.184 ' 00:04:13.184 12:11:59 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:13.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.184 --rc genhtml_branch_coverage=1 00:04:13.184 --rc genhtml_function_coverage=1 00:04:13.184 --rc genhtml_legend=1 00:04:13.184 --rc geninfo_all_blocks=1 00:04:13.184 --rc geninfo_unexecuted_blocks=1 00:04:13.184 00:04:13.184 ' 00:04:13.184 12:11:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56655 00:04:13.184 12:11:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:13.184 12:11:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.184 12:11:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56655 00:04:13.184 12:11:59 rpc -- common/autotest_common.sh@835 -- # '[' -z 56655 ']' 00:04:13.184 12:11:59 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.184 12:11:59 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.184 12:11:59 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.184 12:11:59 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.184 12:11:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.184 [2024-12-06 12:11:59.664959] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:13.184 [2024-12-06 12:11:59.665537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56655 ] 00:04:13.184 [2024-12-06 12:11:59.809230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.184 [2024-12-06 12:11:59.838884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:13.184 [2024-12-06 12:11:59.838945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56655' to capture a snapshot of events at runtime. 00:04:13.184 [2024-12-06 12:11:59.838971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:13.184 [2024-12-06 12:11:59.838979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:13.184 [2024-12-06 12:11:59.839001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56655 for offline analysis/debug. 
00:04:13.184 [2024-12-06 12:11:59.839413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.443 [2024-12-06 12:11:59.879130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:13.443 12:11:59 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.443 12:11:59 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:13.444 12:11:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.444 12:11:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.444 12:11:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:13.444 12:11:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:13.444 12:11:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.444 12:11:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.444 12:11:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.444 ************************************ 00:04:13.444 START TEST rpc_integrity 00:04:13.444 ************************************ 00:04:13.444 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:13.444 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.444 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.444 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.444 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.444 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.444 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:13.444 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.444 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.444 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.444 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.444 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.444 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:13.444 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.444 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.444 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.702 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.702 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.702 { 00:04:13.702 "name": "Malloc0", 00:04:13.702 "aliases": [ 00:04:13.702 "943ee133-bbc2-4db9-bc68-b72214a9f0cb" 00:04:13.702 ], 00:04:13.702 "product_name": "Malloc disk", 00:04:13.702 "block_size": 512, 00:04:13.702 "num_blocks": 16384, 00:04:13.702 "uuid": "943ee133-bbc2-4db9-bc68-b72214a9f0cb", 00:04:13.702 "assigned_rate_limits": { 00:04:13.702 "rw_ios_per_sec": 0, 00:04:13.702 "rw_mbytes_per_sec": 0, 00:04:13.702 "r_mbytes_per_sec": 0, 00:04:13.702 "w_mbytes_per_sec": 0 00:04:13.702 }, 00:04:13.702 "claimed": false, 00:04:13.702 "zoned": false, 00:04:13.702 
"supported_io_types": { 00:04:13.702 "read": true, 00:04:13.702 "write": true, 00:04:13.702 "unmap": true, 00:04:13.702 "flush": true, 00:04:13.702 "reset": true, 00:04:13.702 "nvme_admin": false, 00:04:13.702 "nvme_io": false, 00:04:13.702 "nvme_io_md": false, 00:04:13.702 "write_zeroes": true, 00:04:13.702 "zcopy": true, 00:04:13.702 "get_zone_info": false, 00:04:13.702 "zone_management": false, 00:04:13.702 "zone_append": false, 00:04:13.702 "compare": false, 00:04:13.702 "compare_and_write": false, 00:04:13.702 "abort": true, 00:04:13.702 "seek_hole": false, 00:04:13.702 "seek_data": false, 00:04:13.702 "copy": true, 00:04:13.702 "nvme_iov_md": false 00:04:13.702 }, 00:04:13.702 "memory_domains": [ 00:04:13.702 { 00:04:13.702 "dma_device_id": "system", 00:04:13.702 "dma_device_type": 1 00:04:13.702 }, 00:04:13.702 { 00:04:13.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.702 "dma_device_type": 2 00:04:13.702 } 00:04:13.702 ], 00:04:13.702 "driver_specific": {} 00:04:13.702 } 00:04:13.702 ]' 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.703 [2024-12-06 12:12:00.177599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:13.703 [2024-12-06 12:12:00.177674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:13.703 [2024-12-06 12:12:00.177695] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x221ecb0 00:04:13.703 [2024-12-06 12:12:00.177704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:13.703 [2024-12-06 12:12:00.179054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:13.703 [2024-12-06 12:12:00.179099] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:13.703 Passthru0 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.703 { 00:04:13.703 "name": "Malloc0", 00:04:13.703 "aliases": [ 00:04:13.703 "943ee133-bbc2-4db9-bc68-b72214a9f0cb" 00:04:13.703 ], 00:04:13.703 "product_name": "Malloc disk", 00:04:13.703 "block_size": 512, 00:04:13.703 "num_blocks": 16384, 00:04:13.703 "uuid": "943ee133-bbc2-4db9-bc68-b72214a9f0cb", 00:04:13.703 "assigned_rate_limits": { 00:04:13.703 "rw_ios_per_sec": 0, 00:04:13.703 "rw_mbytes_per_sec": 0, 00:04:13.703 "r_mbytes_per_sec": 0, 00:04:13.703 "w_mbytes_per_sec": 0 00:04:13.703 }, 00:04:13.703 "claimed": true, 00:04:13.703 "claim_type": "exclusive_write", 00:04:13.703 "zoned": false, 00:04:13.703 "supported_io_types": { 00:04:13.703 "read": true, 00:04:13.703 "write": true, 00:04:13.703 "unmap": true, 00:04:13.703 "flush": true, 00:04:13.703 "reset": true, 00:04:13.703 "nvme_admin": false, 
00:04:13.703 "nvme_io": false, 00:04:13.703 "nvme_io_md": false, 00:04:13.703 "write_zeroes": true, 00:04:13.703 "zcopy": true, 00:04:13.703 "get_zone_info": false, 00:04:13.703 "zone_management": false, 00:04:13.703 "zone_append": false, 00:04:13.703 "compare": false, 00:04:13.703 "compare_and_write": false, 00:04:13.703 "abort": true, 00:04:13.703 "seek_hole": false, 00:04:13.703 "seek_data": false, 00:04:13.703 "copy": true, 00:04:13.703 "nvme_iov_md": false 00:04:13.703 }, 00:04:13.703 "memory_domains": [ 00:04:13.703 { 00:04:13.703 "dma_device_id": "system", 00:04:13.703 "dma_device_type": 1 00:04:13.703 }, 00:04:13.703 { 00:04:13.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.703 "dma_device_type": 2 00:04:13.703 } 00:04:13.703 ], 00:04:13.703 "driver_specific": {} 00:04:13.703 }, 00:04:13.703 { 00:04:13.703 "name": "Passthru0", 00:04:13.703 "aliases": [ 00:04:13.703 "7dcfa72d-3953-574f-b33a-301a8d27f0d9" 00:04:13.703 ], 00:04:13.703 "product_name": "passthru", 00:04:13.703 "block_size": 512, 00:04:13.703 "num_blocks": 16384, 00:04:13.703 "uuid": "7dcfa72d-3953-574f-b33a-301a8d27f0d9", 00:04:13.703 "assigned_rate_limits": { 00:04:13.703 "rw_ios_per_sec": 0, 00:04:13.703 "rw_mbytes_per_sec": 0, 00:04:13.703 "r_mbytes_per_sec": 0, 00:04:13.703 "w_mbytes_per_sec": 0 00:04:13.703 }, 00:04:13.703 "claimed": false, 00:04:13.703 "zoned": false, 00:04:13.703 "supported_io_types": { 00:04:13.703 "read": true, 00:04:13.703 "write": true, 00:04:13.703 "unmap": true, 00:04:13.703 "flush": true, 00:04:13.703 "reset": true, 00:04:13.703 "nvme_admin": false, 00:04:13.703 "nvme_io": false, 00:04:13.703 "nvme_io_md": false, 00:04:13.703 "write_zeroes": true, 00:04:13.703 "zcopy": true, 00:04:13.703 "get_zone_info": false, 00:04:13.703 "zone_management": false, 00:04:13.703 "zone_append": false, 00:04:13.703 "compare": false, 00:04:13.703 "compare_and_write": false, 00:04:13.703 "abort": true, 00:04:13.703 "seek_hole": false, 00:04:13.703 "seek_data": false, 00:04:13.703 "copy": true, 00:04:13.703 "nvme_iov_md": false 00:04:13.703 }, 00:04:13.703 "memory_domains": [ 00:04:13.703 { 00:04:13.703 "dma_device_id": "system", 00:04:13.703 "dma_device_type": 1 00:04:13.703 }, 00:04:13.703 { 00:04:13.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.703 "dma_device_type": 2 00:04:13.703 } 00:04:13.703 ], 00:04:13.703 "driver_specific": { 00:04:13.703 "passthru": { 00:04:13.703 "name": "Passthru0", 00:04:13.703 "base_bdev_name": "Malloc0" 00:04:13.703 } 00:04:13.703 } 00:04:13.703 } 00:04:13.703 ]' 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:13.703 12:12:00 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:13.703 ************************************ 00:04:13.703 END TEST rpc_integrity 00:04:13.703 ************************************ 00:04:13.703 12:12:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.703 00:04:13.703 real 0m0.337s 00:04:13.703 user 0m0.218s 00:04:13.703 sys 0m0.046s 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.703 12:12:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.963 12:12:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:13.963 12:12:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.963 12:12:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.963 12:12:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.963 ************************************ 00:04:13.963 START TEST rpc_plugins 00:04:13.963 ************************************ 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:13.963 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.963 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:13.963 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.963 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:13.963 { 00:04:13.963 "name": "Malloc1", 00:04:13.963 "aliases": [ 00:04:13.963 "9a8b52fa-8a48-4f73-8ba0-73633436675a" 00:04:13.963 ], 00:04:13.963 "product_name": "Malloc disk", 00:04:13.963 "block_size": 4096, 00:04:13.963 "num_blocks": 256, 00:04:13.963 "uuid": "9a8b52fa-8a48-4f73-8ba0-73633436675a", 00:04:13.963 "assigned_rate_limits": { 00:04:13.963 "rw_ios_per_sec": 0, 00:04:13.963 "rw_mbytes_per_sec": 0, 00:04:13.963 "r_mbytes_per_sec": 0, 00:04:13.963 "w_mbytes_per_sec": 0 00:04:13.963 }, 00:04:13.963 "claimed": false, 00:04:13.963 "zoned": false, 00:04:13.963 "supported_io_types": { 00:04:13.963 "read": true, 00:04:13.963 "write": true, 00:04:13.963 "unmap": true, 00:04:13.963 "flush": true, 00:04:13.963 "reset": true, 00:04:13.963 "nvme_admin": false, 00:04:13.963 "nvme_io": false, 00:04:13.963 "nvme_io_md": false, 00:04:13.963 "write_zeroes": true, 00:04:13.963 "zcopy": true, 00:04:13.963 "get_zone_info": false, 00:04:13.963 "zone_management": false, 00:04:13.963 "zone_append": false, 00:04:13.963 "compare": false, 00:04:13.963 "compare_and_write": false, 00:04:13.963 "abort": true, 00:04:13.963 "seek_hole": false, 00:04:13.963 "seek_data": false, 00:04:13.963 "copy": true, 00:04:13.963 "nvme_iov_md": false 00:04:13.963 }, 00:04:13.963 "memory_domains": [ 00:04:13.963 { 
00:04:13.963 "dma_device_id": "system", 00:04:13.963 "dma_device_type": 1 00:04:13.963 }, 00:04:13.963 { 00:04:13.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.963 "dma_device_type": 2 00:04:13.963 } 00:04:13.963 ], 00:04:13.963 "driver_specific": {} 00:04:13.963 } 00:04:13.963 ]' 00:04:13.963 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:13.963 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:13.963 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.963 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.963 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:13.963 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:13.963 ************************************ 00:04:13.963 END TEST rpc_plugins 00:04:13.963 ************************************ 00:04:13.963 12:12:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:13.963 00:04:13.963 real 0m0.162s 00:04:13.963 user 0m0.109s 00:04:13.963 sys 0m0.015s 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.963 12:12:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.963 12:12:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:13.963 12:12:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.963 12:12:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.963 12:12:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.963 ************************************ 00:04:13.963 START TEST rpc_trace_cmd_test 00:04:13.963 ************************************ 00:04:13.963 12:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:13.963 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:13.963 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:13.963 12:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.963 12:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.223 12:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.223 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:14.223 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56655", 00:04:14.223 "tpoint_group_mask": "0x8", 00:04:14.223 "iscsi_conn": { 00:04:14.223 "mask": "0x2", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "scsi": { 00:04:14.223 "mask": "0x4", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "bdev": { 00:04:14.223 "mask": "0x8", 00:04:14.223 "tpoint_mask": "0xffffffffffffffff" 00:04:14.223 }, 00:04:14.223 "nvmf_rdma": { 00:04:14.223 "mask": "0x10", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "nvmf_tcp": { 00:04:14.223 "mask": "0x20", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "ftl": { 00:04:14.223 
"mask": "0x40", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "blobfs": { 00:04:14.223 "mask": "0x80", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "dsa": { 00:04:14.223 "mask": "0x200", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "thread": { 00:04:14.223 "mask": "0x400", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "nvme_pcie": { 00:04:14.223 "mask": "0x800", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "iaa": { 00:04:14.223 "mask": "0x1000", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "nvme_tcp": { 00:04:14.223 "mask": "0x2000", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "bdev_nvme": { 00:04:14.223 "mask": "0x4000", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "sock": { 00:04:14.223 "mask": "0x8000", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "blob": { 00:04:14.223 "mask": "0x10000", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "bdev_raid": { 00:04:14.223 "mask": "0x20000", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 }, 00:04:14.223 "scheduler": { 00:04:14.223 "mask": "0x40000", 00:04:14.223 "tpoint_mask": "0x0" 00:04:14.223 } 00:04:14.223 }' 00:04:14.223 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:14.223 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:14.223 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:14.223 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:14.223 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:14.223 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:14.223 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:14.223 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:14.223 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:14.483 ************************************ 00:04:14.483 END TEST rpc_trace_cmd_test 00:04:14.483 ************************************ 00:04:14.483 12:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:14.483 00:04:14.483 real 0m0.282s 00:04:14.483 user 0m0.243s 00:04:14.483 sys 0m0.027s 00:04:14.483 12:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.483 12:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.483 12:12:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:14.483 12:12:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:14.483 12:12:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:14.483 12:12:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.483 12:12:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.483 12:12:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.483 ************************************ 00:04:14.483 START TEST rpc_daemon_integrity 00:04:14.483 ************************************ 00:04:14.483 12:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:14.483 12:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.483 12:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.483 12:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.483 
12:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.483 12:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.483 12:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.483 { 00:04:14.483 "name": "Malloc2", 00:04:14.483 "aliases": [ 00:04:14.483 "15fd0038-bb50-4006-95ed-fe86154399f1" 00:04:14.483 ], 00:04:14.483 "product_name": "Malloc disk", 00:04:14.483 "block_size": 512, 00:04:14.483 "num_blocks": 16384, 00:04:14.483 "uuid": "15fd0038-bb50-4006-95ed-fe86154399f1", 00:04:14.483 "assigned_rate_limits": { 00:04:14.483 "rw_ios_per_sec": 0, 00:04:14.483 "rw_mbytes_per_sec": 0, 00:04:14.483 "r_mbytes_per_sec": 0, 00:04:14.483 "w_mbytes_per_sec": 0 00:04:14.483 }, 00:04:14.483 "claimed": false, 00:04:14.483 "zoned": false, 00:04:14.483 "supported_io_types": { 00:04:14.483 "read": true, 00:04:14.483 "write": true, 00:04:14.483 "unmap": true, 00:04:14.483 "flush": true, 00:04:14.483 "reset": true, 00:04:14.483 "nvme_admin": false, 00:04:14.483 "nvme_io": false, 00:04:14.483 "nvme_io_md": false, 00:04:14.483 "write_zeroes": true, 00:04:14.483 "zcopy": true, 00:04:14.483 "get_zone_info": false, 00:04:14.483 "zone_management": false, 00:04:14.483 "zone_append": false, 00:04:14.483 "compare": false, 00:04:14.483 "compare_and_write": false, 00:04:14.483 "abort": true, 00:04:14.483 "seek_hole": false, 00:04:14.483 "seek_data": false, 00:04:14.483 "copy": true, 00:04:14.483 "nvme_iov_md": false 00:04:14.483 }, 00:04:14.483 "memory_domains": [ 00:04:14.483 { 00:04:14.483 "dma_device_id": "system", 00:04:14.483 "dma_device_type": 1 00:04:14.483 }, 00:04:14.483 { 00:04:14.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.483 "dma_device_type": 2 00:04:14.483 } 00:04:14.483 ], 00:04:14.483 "driver_specific": {} 00:04:14.483 } 00:04:14.483 ]' 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.483 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.483 [2024-12-06 12:12:01.089896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:14.483 [2024-12-06 12:12:01.089936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:14.484 [2024-12-06 12:12:01.089956] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2282270 00:04:14.484 [2024-12-06 12:12:01.089964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.484 [2024-12-06 12:12:01.091113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.484 [2024-12-06 12:12:01.091148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.484 Passthru0 00:04:14.484 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.484 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.484 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.484 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.484 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.484 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.484 { 00:04:14.484 "name": "Malloc2", 00:04:14.484 "aliases": [ 00:04:14.484 "15fd0038-bb50-4006-95ed-fe86154399f1" 00:04:14.484 ], 00:04:14.484 "product_name": "Malloc disk", 00:04:14.484 "block_size": 512, 00:04:14.484 "num_blocks": 16384, 00:04:14.484 "uuid": "15fd0038-bb50-4006-95ed-fe86154399f1", 00:04:14.484 "assigned_rate_limits": { 00:04:14.484 "rw_ios_per_sec": 0, 00:04:14.484 "rw_mbytes_per_sec": 0, 00:04:14.484 "r_mbytes_per_sec": 0, 00:04:14.484 "w_mbytes_per_sec": 0 00:04:14.484 }, 00:04:14.484 "claimed": true, 00:04:14.484 "claim_type": "exclusive_write", 00:04:14.484 "zoned": false, 00:04:14.484 "supported_io_types": { 00:04:14.484 "read": true, 00:04:14.484 "write": true, 00:04:14.484 "unmap": true, 00:04:14.484 "flush": true, 00:04:14.484 "reset": true, 00:04:14.484 "nvme_admin": false, 00:04:14.484 "nvme_io": false, 00:04:14.484 "nvme_io_md": false, 00:04:14.484 "write_zeroes": true, 00:04:14.484 "zcopy": true, 00:04:14.484 "get_zone_info": false, 00:04:14.484 "zone_management": false, 00:04:14.484 "zone_append": false, 00:04:14.484 "compare": false, 00:04:14.484 "compare_and_write": false, 00:04:14.484 "abort": true, 00:04:14.484 "seek_hole": false, 00:04:14.484 "seek_data": false, 00:04:14.484 "copy": true, 00:04:14.484 "nvme_iov_md": false 00:04:14.484 }, 00:04:14.484 "memory_domains": [ 00:04:14.484 { 00:04:14.484 "dma_device_id": "system", 00:04:14.484 "dma_device_type": 1 00:04:14.484 }, 00:04:14.484 { 00:04:14.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.484 "dma_device_type": 2 00:04:14.484 } 00:04:14.484 ], 00:04:14.484 "driver_specific": {} 00:04:14.484 }, 00:04:14.484 { 00:04:14.484 "name": "Passthru0", 00:04:14.484 "aliases": [ 00:04:14.484 "a038f409-ad6c-535a-ac7e-43ffae43cdd8" 00:04:14.484 ], 00:04:14.484 "product_name": "passthru", 00:04:14.484 "block_size": 512, 00:04:14.484 "num_blocks": 16384, 00:04:14.484 "uuid": "a038f409-ad6c-535a-ac7e-43ffae43cdd8", 00:04:14.484 "assigned_rate_limits": { 00:04:14.484 "rw_ios_per_sec": 0, 00:04:14.484 "rw_mbytes_per_sec": 0, 00:04:14.484 "r_mbytes_per_sec": 0, 00:04:14.484 "w_mbytes_per_sec": 0 00:04:14.484 }, 00:04:14.484 "claimed": false, 00:04:14.484 "zoned": false, 00:04:14.484 "supported_io_types": { 00:04:14.484 "read": true, 00:04:14.484 "write": true, 00:04:14.484 "unmap": true, 00:04:14.484 "flush": true, 00:04:14.484 "reset": true, 00:04:14.484 "nvme_admin": false, 00:04:14.484 "nvme_io": false, 00:04:14.484 
"nvme_io_md": false, 00:04:14.484 "write_zeroes": true, 00:04:14.484 "zcopy": true, 00:04:14.484 "get_zone_info": false, 00:04:14.484 "zone_management": false, 00:04:14.484 "zone_append": false, 00:04:14.484 "compare": false, 00:04:14.484 "compare_and_write": false, 00:04:14.484 "abort": true, 00:04:14.484 "seek_hole": false, 00:04:14.484 "seek_data": false, 00:04:14.484 "copy": true, 00:04:14.484 "nvme_iov_md": false 00:04:14.484 }, 00:04:14.484 "memory_domains": [ 00:04:14.484 { 00:04:14.484 "dma_device_id": "system", 00:04:14.484 "dma_device_type": 1 00:04:14.484 }, 00:04:14.484 { 00:04:14.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.484 "dma_device_type": 2 00:04:14.484 } 00:04:14.484 ], 00:04:14.484 "driver_specific": { 00:04:14.484 "passthru": { 00:04:14.484 "name": "Passthru0", 00:04:14.484 "base_bdev_name": "Malloc2" 00:04:14.484 } 00:04:14.484 } 00:04:14.484 } 00:04:14.484 ]' 00:04:14.484 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.743 ************************************ 00:04:14.743 END TEST rpc_daemon_integrity 00:04:14.743 ************************************ 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.743 00:04:14.743 real 0m0.315s 00:04:14.743 user 0m0.211s 00:04:14.743 sys 0m0.040s 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.743 12:12:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.743 12:12:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:14.743 12:12:01 rpc -- rpc/rpc.sh@84 -- # killprocess 56655 00:04:14.743 12:12:01 rpc -- common/autotest_common.sh@954 -- # '[' -z 56655 ']' 00:04:14.743 12:12:01 rpc -- common/autotest_common.sh@958 -- # kill -0 56655 00:04:14.743 12:12:01 rpc -- common/autotest_common.sh@959 -- # uname 00:04:14.743 12:12:01 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.743 12:12:01 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56655 00:04:14.743 killing process with pid 56655 00:04:14.743 12:12:01 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.743 12:12:01 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.743 12:12:01 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56655' 00:04:14.743 12:12:01 rpc -- common/autotest_common.sh@973 -- # kill 56655 00:04:14.743 12:12:01 rpc -- common/autotest_common.sh@978 -- # wait 56655 00:04:15.002 00:04:15.002 real 0m2.157s 00:04:15.002 user 0m2.933s 00:04:15.002 sys 0m0.560s 00:04:15.002 12:12:01 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.002 12:12:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.002 ************************************ 00:04:15.002 END TEST rpc 00:04:15.002 ************************************ 00:04:15.002 12:12:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:15.002 12:12:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.002 12:12:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.002 12:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:15.002 ************************************ 00:04:15.002 START TEST skip_rpc 00:04:15.002 ************************************ 00:04:15.002 12:12:01 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:15.261 * Looking for test storage... 00:04:15.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.261 12:12:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.261 --rc genhtml_branch_coverage=1 00:04:15.261 --rc genhtml_function_coverage=1 00:04:15.261 --rc genhtml_legend=1 00:04:15.261 --rc geninfo_all_blocks=1 00:04:15.261 --rc geninfo_unexecuted_blocks=1 00:04:15.261 00:04:15.261 ' 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.261 --rc genhtml_branch_coverage=1 00:04:15.261 --rc genhtml_function_coverage=1 00:04:15.261 --rc genhtml_legend=1 00:04:15.261 --rc geninfo_all_blocks=1 00:04:15.261 --rc geninfo_unexecuted_blocks=1 00:04:15.261 00:04:15.261 ' 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.261 --rc genhtml_branch_coverage=1 00:04:15.261 --rc genhtml_function_coverage=1 00:04:15.261 --rc genhtml_legend=1 00:04:15.261 --rc geninfo_all_blocks=1 00:04:15.261 --rc geninfo_unexecuted_blocks=1 00:04:15.261 00:04:15.261 ' 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.261 --rc genhtml_branch_coverage=1 00:04:15.261 --rc genhtml_function_coverage=1 00:04:15.261 --rc genhtml_legend=1 00:04:15.261 --rc geninfo_all_blocks=1 00:04:15.261 --rc geninfo_unexecuted_blocks=1 00:04:15.261 00:04:15.261 ' 00:04:15.261 12:12:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:15.261 12:12:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:15.261 12:12:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.261 12:12:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.261 ************************************ 00:04:15.261 START TEST skip_rpc 00:04:15.261 ************************************ 00:04:15.261 12:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:15.261 12:12:01 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56848 00:04:15.261 12:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:15.261 12:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.261 12:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:15.261 [2024-12-06 12:12:01.884283] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:15.261 [2024-12-06 12:12:01.884374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56848 ] 00:04:15.521 [2024-12-06 12:12:02.025867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.521 [2024-12-06 12:12:02.055000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.521 [2024-12-06 12:12:02.093813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56848 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56848 ']' 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56848 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56848 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 56848' 00:04:20.795 killing process with pid 56848 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56848 00:04:20.795 12:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56848 00:04:20.795 ************************************ 00:04:20.795 END TEST skip_rpc 00:04:20.795 ************************************ 00:04:20.795 00:04:20.795 real 0m5.267s 00:04:20.795 user 0m5.011s 00:04:20.795 sys 0m0.175s 00:04:20.795 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.795 12:12:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.795 12:12:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:20.795 12:12:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.795 12:12:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.795 12:12:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.795 ************************************ 00:04:20.795 START TEST skip_rpc_with_json 00:04:20.795 ************************************ 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:20.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56935 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56935 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56935 ']' 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.795 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.795 [2024-12-06 12:12:07.203228] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:20.795 [2024-12-06 12:12:07.203496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56935 ] 00:04:20.795 [2024-12-06 12:12:07.348805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.795 [2024-12-06 12:12:07.376895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.795 [2024-12-06 12:12:07.418000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.055 [2024-12-06 12:12:07.537050] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:21.055 request: 00:04:21.055 { 00:04:21.055 "trtype": "tcp", 00:04:21.055 "method": "nvmf_get_transports", 00:04:21.055 "req_id": 1 00:04:21.055 } 00:04:21.055 Got JSON-RPC error response 00:04:21.055 response: 00:04:21.055 { 00:04:21.055 "code": -19, 00:04:21.055 "message": "No such device" 00:04:21.055 } 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.055 [2024-12-06 12:12:07.549140] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.055 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.315 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.315 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:21.315 { 00:04:21.315 "subsystems": [ 00:04:21.315 { 00:04:21.315 "subsystem": "fsdev", 00:04:21.315 "config": [ 00:04:21.315 { 00:04:21.315 "method": "fsdev_set_opts", 00:04:21.315 "params": { 00:04:21.315 "fsdev_io_pool_size": 65535, 00:04:21.315 "fsdev_io_cache_size": 256 00:04:21.315 } 00:04:21.315 } 00:04:21.315 ] 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "subsystem": "keyring", 00:04:21.315 "config": [] 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "subsystem": "iobuf", 00:04:21.315 "config": [ 00:04:21.315 { 00:04:21.315 "method": "iobuf_set_options", 00:04:21.315 "params": { 00:04:21.315 "small_pool_count": 8192, 00:04:21.315 "large_pool_count": 1024, 00:04:21.315 "small_bufsize": 8192, 00:04:21.315 "large_bufsize": 135168, 00:04:21.315 "enable_numa": false 00:04:21.315 } 
00:04:21.315 } 00:04:21.315 ] 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "subsystem": "sock", 00:04:21.315 "config": [ 00:04:21.315 { 00:04:21.315 "method": "sock_set_default_impl", 00:04:21.315 "params": { 00:04:21.315 "impl_name": "uring" 00:04:21.315 } 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "method": "sock_impl_set_options", 00:04:21.315 "params": { 00:04:21.315 "impl_name": "ssl", 00:04:21.315 "recv_buf_size": 4096, 00:04:21.315 "send_buf_size": 4096, 00:04:21.315 "enable_recv_pipe": true, 00:04:21.315 "enable_quickack": false, 00:04:21.315 "enable_placement_id": 0, 00:04:21.315 "enable_zerocopy_send_server": true, 00:04:21.315 "enable_zerocopy_send_client": false, 00:04:21.315 "zerocopy_threshold": 0, 00:04:21.315 "tls_version": 0, 00:04:21.315 "enable_ktls": false 00:04:21.315 } 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "method": "sock_impl_set_options", 00:04:21.315 "params": { 00:04:21.315 "impl_name": "posix", 00:04:21.315 "recv_buf_size": 2097152, 00:04:21.315 "send_buf_size": 2097152, 00:04:21.315 "enable_recv_pipe": true, 00:04:21.315 "enable_quickack": false, 00:04:21.315 "enable_placement_id": 0, 00:04:21.315 "enable_zerocopy_send_server": true, 00:04:21.315 "enable_zerocopy_send_client": false, 00:04:21.315 "zerocopy_threshold": 0, 00:04:21.315 "tls_version": 0, 00:04:21.315 "enable_ktls": false 00:04:21.315 } 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "method": "sock_impl_set_options", 00:04:21.315 "params": { 00:04:21.315 "impl_name": "uring", 00:04:21.315 "recv_buf_size": 2097152, 00:04:21.315 "send_buf_size": 2097152, 00:04:21.315 "enable_recv_pipe": true, 00:04:21.315 "enable_quickack": false, 00:04:21.315 "enable_placement_id": 0, 00:04:21.315 "enable_zerocopy_send_server": false, 00:04:21.315 "enable_zerocopy_send_client": false, 00:04:21.315 "zerocopy_threshold": 0, 00:04:21.315 "tls_version": 0, 00:04:21.315 "enable_ktls": false 00:04:21.315 } 00:04:21.315 } 00:04:21.315 ] 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "subsystem": "vmd", 00:04:21.315 "config": [] 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "subsystem": "accel", 00:04:21.315 "config": [ 00:04:21.315 { 00:04:21.315 "method": "accel_set_options", 00:04:21.315 "params": { 00:04:21.315 "small_cache_size": 128, 00:04:21.315 "large_cache_size": 16, 00:04:21.315 "task_count": 2048, 00:04:21.315 "sequence_count": 2048, 00:04:21.315 "buf_count": 2048 00:04:21.315 } 00:04:21.315 } 00:04:21.315 ] 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "subsystem": "bdev", 00:04:21.315 "config": [ 00:04:21.315 { 00:04:21.315 "method": "bdev_set_options", 00:04:21.315 "params": { 00:04:21.315 "bdev_io_pool_size": 65535, 00:04:21.315 "bdev_io_cache_size": 256, 00:04:21.315 "bdev_auto_examine": true, 00:04:21.315 "iobuf_small_cache_size": 128, 00:04:21.315 "iobuf_large_cache_size": 16 00:04:21.315 } 00:04:21.315 }, 00:04:21.315 { 00:04:21.315 "method": "bdev_raid_set_options", 00:04:21.315 "params": { 00:04:21.315 "process_window_size_kb": 1024, 00:04:21.315 "process_max_bandwidth_mb_sec": 0 00:04:21.316 } 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "method": "bdev_iscsi_set_options", 00:04:21.316 "params": { 00:04:21.316 "timeout_sec": 30 00:04:21.316 } 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "method": "bdev_nvme_set_options", 00:04:21.316 "params": { 00:04:21.316 "action_on_timeout": "none", 00:04:21.316 "timeout_us": 0, 00:04:21.316 "timeout_admin_us": 0, 00:04:21.316 "keep_alive_timeout_ms": 10000, 00:04:21.316 "arbitration_burst": 0, 00:04:21.316 "low_priority_weight": 0, 00:04:21.316 "medium_priority_weight": 
0, 00:04:21.316 "high_priority_weight": 0, 00:04:21.316 "nvme_adminq_poll_period_us": 10000, 00:04:21.316 "nvme_ioq_poll_period_us": 0, 00:04:21.316 "io_queue_requests": 0, 00:04:21.316 "delay_cmd_submit": true, 00:04:21.316 "transport_retry_count": 4, 00:04:21.316 "bdev_retry_count": 3, 00:04:21.316 "transport_ack_timeout": 0, 00:04:21.316 "ctrlr_loss_timeout_sec": 0, 00:04:21.316 "reconnect_delay_sec": 0, 00:04:21.316 "fast_io_fail_timeout_sec": 0, 00:04:21.316 "disable_auto_failback": false, 00:04:21.316 "generate_uuids": false, 00:04:21.316 "transport_tos": 0, 00:04:21.316 "nvme_error_stat": false, 00:04:21.316 "rdma_srq_size": 0, 00:04:21.316 "io_path_stat": false, 00:04:21.316 "allow_accel_sequence": false, 00:04:21.316 "rdma_max_cq_size": 0, 00:04:21.316 "rdma_cm_event_timeout_ms": 0, 00:04:21.316 "dhchap_digests": [ 00:04:21.316 "sha256", 00:04:21.316 "sha384", 00:04:21.316 "sha512" 00:04:21.316 ], 00:04:21.316 "dhchap_dhgroups": [ 00:04:21.316 "null", 00:04:21.316 "ffdhe2048", 00:04:21.316 "ffdhe3072", 00:04:21.316 "ffdhe4096", 00:04:21.316 "ffdhe6144", 00:04:21.316 "ffdhe8192" 00:04:21.316 ] 00:04:21.316 } 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "method": "bdev_nvme_set_hotplug", 00:04:21.316 "params": { 00:04:21.316 "period_us": 100000, 00:04:21.316 "enable": false 00:04:21.316 } 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "method": "bdev_wait_for_examine" 00:04:21.316 } 00:04:21.316 ] 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "subsystem": "scsi", 00:04:21.316 "config": null 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "subsystem": "scheduler", 00:04:21.316 "config": [ 00:04:21.316 { 00:04:21.316 "method": "framework_set_scheduler", 00:04:21.316 "params": { 00:04:21.316 "name": "static" 00:04:21.316 } 00:04:21.316 } 00:04:21.316 ] 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "subsystem": "vhost_scsi", 00:04:21.316 "config": [] 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "subsystem": "vhost_blk", 00:04:21.316 "config": [] 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "subsystem": "ublk", 00:04:21.316 "config": [] 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "subsystem": "nbd", 00:04:21.316 "config": [] 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "subsystem": "nvmf", 00:04:21.316 "config": [ 00:04:21.316 { 00:04:21.316 "method": "nvmf_set_config", 00:04:21.316 "params": { 00:04:21.316 "discovery_filter": "match_any", 00:04:21.316 "admin_cmd_passthru": { 00:04:21.316 "identify_ctrlr": false 00:04:21.316 }, 00:04:21.316 "dhchap_digests": [ 00:04:21.316 "sha256", 00:04:21.316 "sha384", 00:04:21.316 "sha512" 00:04:21.316 ], 00:04:21.316 "dhchap_dhgroups": [ 00:04:21.316 "null", 00:04:21.316 "ffdhe2048", 00:04:21.316 "ffdhe3072", 00:04:21.316 "ffdhe4096", 00:04:21.316 "ffdhe6144", 00:04:21.316 "ffdhe8192" 00:04:21.316 ] 00:04:21.316 } 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "method": "nvmf_set_max_subsystems", 00:04:21.316 "params": { 00:04:21.316 "max_subsystems": 1024 00:04:21.316 } 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "method": "nvmf_set_crdt", 00:04:21.316 "params": { 00:04:21.316 "crdt1": 0, 00:04:21.316 "crdt2": 0, 00:04:21.316 "crdt3": 0 00:04:21.316 } 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "method": "nvmf_create_transport", 00:04:21.316 "params": { 00:04:21.316 "trtype": "TCP", 00:04:21.316 "max_queue_depth": 128, 00:04:21.316 "max_io_qpairs_per_ctrlr": 127, 00:04:21.316 "in_capsule_data_size": 4096, 00:04:21.316 "max_io_size": 131072, 00:04:21.316 "io_unit_size": 131072, 00:04:21.316 "max_aq_depth": 128, 00:04:21.316 "num_shared_buffers": 511, 00:04:21.316 
"buf_cache_size": 4294967295, 00:04:21.316 "dif_insert_or_strip": false, 00:04:21.316 "zcopy": false, 00:04:21.316 "c2h_success": true, 00:04:21.316 "sock_priority": 0, 00:04:21.316 "abort_timeout_sec": 1, 00:04:21.316 "ack_timeout": 0, 00:04:21.316 "data_wr_pool_size": 0 00:04:21.316 } 00:04:21.316 } 00:04:21.316 ] 00:04:21.316 }, 00:04:21.316 { 00:04:21.316 "subsystem": "iscsi", 00:04:21.316 "config": [ 00:04:21.316 { 00:04:21.316 "method": "iscsi_set_options", 00:04:21.316 "params": { 00:04:21.316 "node_base": "iqn.2016-06.io.spdk", 00:04:21.316 "max_sessions": 128, 00:04:21.316 "max_connections_per_session": 2, 00:04:21.316 "max_queue_depth": 64, 00:04:21.316 "default_time2wait": 2, 00:04:21.316 "default_time2retain": 20, 00:04:21.316 "first_burst_length": 8192, 00:04:21.316 "immediate_data": true, 00:04:21.316 "allow_duplicated_isid": false, 00:04:21.316 "error_recovery_level": 0, 00:04:21.316 "nop_timeout": 60, 00:04:21.316 "nop_in_interval": 30, 00:04:21.316 "disable_chap": false, 00:04:21.316 "require_chap": false, 00:04:21.316 "mutual_chap": false, 00:04:21.316 "chap_group": 0, 00:04:21.316 "max_large_datain_per_connection": 64, 00:04:21.316 "max_r2t_per_connection": 4, 00:04:21.316 "pdu_pool_size": 36864, 00:04:21.316 "immediate_data_pool_size": 16384, 00:04:21.316 "data_out_pool_size": 2048 00:04:21.316 } 00:04:21.316 } 00:04:21.316 ] 00:04:21.316 } 00:04:21.316 ] 00:04:21.316 } 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56935 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56935 ']' 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56935 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56935 00:04:21.316 killing process with pid 56935 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56935' 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56935 00:04:21.316 12:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56935 00:04:21.577 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56949 00:04:21.577 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:21.577 12:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:26.852 12:12:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56949 00:04:26.852 12:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56949 ']' 00:04:26.852 12:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56949 00:04:26.852 12:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:26.852 12:12:12 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.852 12:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56949 00:04:26.852 killing process with pid 56949 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56949' 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56949 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56949 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.852 ************************************ 00:04:26.852 END TEST skip_rpc_with_json 00:04:26.852 ************************************ 00:04:26.852 00:04:26.852 real 0m6.110s 00:04:26.852 user 0m5.882s 00:04:26.852 sys 0m0.390s 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.852 12:12:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:26.852 12:12:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.852 12:12:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.852 12:12:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.852 ************************************ 00:04:26.852 START TEST skip_rpc_with_delay 00:04:26.852 ************************************ 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.852 12:12:13 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.852 [2024-12-06 12:12:13.368472] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:26.852 00:04:26.852 real 0m0.090s 00:04:26.852 user 0m0.060s 00:04:26.852 sys 0m0.028s 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.852 12:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:26.852 ************************************ 00:04:26.852 END TEST skip_rpc_with_delay 00:04:26.852 ************************************ 00:04:26.852 12:12:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:26.852 12:12:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:26.852 12:12:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:26.852 12:12:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.852 12:12:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.852 12:12:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.852 ************************************ 00:04:26.852 START TEST exit_on_failed_rpc_init 00:04:26.852 ************************************ 00:04:26.852 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:26.852 12:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57059 00:04:26.853 12:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.853 12:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57059 00:04:26.853 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57059 ']' 00:04:26.853 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.853 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.853 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.853 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.853 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.853 [2024-12-06 12:12:13.504147] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:26.853 [2024-12-06 12:12:13.504262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57059 ] 00:04:27.112 [2024-12-06 12:12:13.641946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.112 [2024-12-06 12:12:13.670454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.112 [2024-12-06 12:12:13.707026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:27.372 12:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.372 [2024-12-06 12:12:13.897661] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:27.372 [2024-12-06 12:12:13.897753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57069 ] 00:04:27.632 [2024-12-06 12:12:14.048842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.632 [2024-12-06 12:12:14.086975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.632 [2024-12-06 12:12:14.087385] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:27.632 [2024-12-06 12:12:14.087412] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:27.632 [2024-12-06 12:12:14.087423] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57059 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57059 ']' 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57059 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57059 00:04:27.632 killing process with pid 57059 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57059' 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57059 00:04:27.632 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57059 00:04:27.904 ************************************ 00:04:27.904 END TEST exit_on_failed_rpc_init 00:04:27.904 ************************************ 00:04:27.904 00:04:27.904 real 0m0.958s 00:04:27.904 user 0m1.129s 00:04:27.904 sys 0m0.266s 00:04:27.904 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.904 12:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.904 12:12:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:27.904 ************************************ 00:04:27.904 END TEST skip_rpc 00:04:27.904 ************************************ 00:04:27.904 00:04:27.904 real 0m12.835s 00:04:27.904 user 0m12.266s 00:04:27.904 sys 0m1.062s 00:04:27.904 12:12:14 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.904 12:12:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.904 12:12:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:27.904 12:12:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.904 12:12:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.904 12:12:14 -- common/autotest_common.sh@10 -- # set +x 00:04:27.904 
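For context, the exit_on_failed_rpc_init case that just finished above amounts to starting a second spdk_tgt against an RPC socket already owned by a running instance and expecting the newcomer to bail out. A minimal sketch of that scenario, not part of the captured output (the explicit -r flag and the sleep are illustrative; the test itself relies on waitforlisten and the default socket path):

# illustrative sketch only -- not taken from this run
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$SPDK_TGT -m 0x1 -r /var/tmp/spdk.sock &   # first instance owns the RPC socket
first_pid=$!
sleep 1                                    # crude stand-in for waitforlisten
if $SPDK_TGT -m 0x2 -r /var/tmp/spdk.sock; then
    echo "unexpected: second instance started on a busy socket" >&2
else
    echo "second instance refused the busy RPC socket, as the test expects"
fi
kill -SIGINT "$first_pid"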
************************************ 00:04:27.904 START TEST rpc_client 00:04:27.904 ************************************ 00:04:27.904 12:12:14 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:28.197 * Looking for test storage... 00:04:28.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:28.197 12:12:14 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.197 12:12:14 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.197 12:12:14 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.197 12:12:14 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:28.197 12:12:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:28.198 12:12:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.198 12:12:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:28.198 12:12:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.198 12:12:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:28.198 12:12:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:28.198 12:12:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.198 12:12:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:28.198 12:12:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.198 12:12:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.198 12:12:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.198 12:12:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:28.198 12:12:14 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.198 12:12:14 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.198 --rc genhtml_branch_coverage=1 00:04:28.198 --rc genhtml_function_coverage=1 00:04:28.198 --rc genhtml_legend=1 00:04:28.198 --rc geninfo_all_blocks=1 00:04:28.198 --rc geninfo_unexecuted_blocks=1 00:04:28.198 00:04:28.198 ' 00:04:28.198 12:12:14 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.198 --rc genhtml_branch_coverage=1 00:04:28.198 --rc genhtml_function_coverage=1 00:04:28.198 --rc genhtml_legend=1 00:04:28.198 --rc geninfo_all_blocks=1 00:04:28.198 --rc geninfo_unexecuted_blocks=1 00:04:28.198 00:04:28.198 ' 00:04:28.198 12:12:14 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:28.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.198 --rc genhtml_branch_coverage=1 00:04:28.198 --rc genhtml_function_coverage=1 00:04:28.198 --rc genhtml_legend=1 00:04:28.198 --rc geninfo_all_blocks=1 00:04:28.198 --rc geninfo_unexecuted_blocks=1 00:04:28.198 00:04:28.198 ' 00:04:28.198 12:12:14 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.198 --rc genhtml_branch_coverage=1 00:04:28.198 --rc genhtml_function_coverage=1 00:04:28.198 --rc genhtml_legend=1 00:04:28.198 --rc geninfo_all_blocks=1 00:04:28.198 --rc geninfo_unexecuted_blocks=1 00:04:28.198 00:04:28.198 ' 00:04:28.198 12:12:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:28.198 OK 00:04:28.198 12:12:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:28.198 00:04:28.198 real 0m0.223s 00:04:28.198 user 0m0.135s 00:04:28.198 sys 0m0.087s 00:04:28.198 ************************************ 00:04:28.198 END TEST rpc_client 00:04:28.198 ************************************ 00:04:28.198 12:12:14 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.198 12:12:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:28.198 12:12:14 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:28.198 12:12:14 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.198 12:12:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.198 12:12:14 -- common/autotest_common.sh@10 -- # set +x 00:04:28.198 ************************************ 00:04:28.198 START TEST json_config 00:04:28.198 ************************************ 00:04:28.198 12:12:14 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:28.198 12:12:14 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.198 12:12:14 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.198 12:12:14 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.466 12:12:14 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.466 12:12:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.466 12:12:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.466 12:12:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.466 12:12:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.466 12:12:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.466 12:12:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.466 12:12:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.466 12:12:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.466 12:12:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.466 12:12:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.466 12:12:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.466 12:12:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:28.466 12:12:14 json_config -- scripts/common.sh@345 -- # : 1 00:04:28.466 12:12:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.466 12:12:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.466 12:12:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:28.466 12:12:14 json_config -- scripts/common.sh@353 -- # local d=1 00:04:28.466 12:12:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.466 12:12:14 json_config -- scripts/common.sh@355 -- # echo 1 00:04:28.466 12:12:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.466 12:12:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:28.466 12:12:14 json_config -- scripts/common.sh@353 -- # local d=2 00:04:28.466 12:12:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.466 12:12:14 json_config -- scripts/common.sh@355 -- # echo 2 00:04:28.466 12:12:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.466 12:12:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.466 12:12:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.466 12:12:14 json_config -- scripts/common.sh@368 -- # return 0 00:04:28.466 12:12:14 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.466 12:12:14 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.466 --rc genhtml_branch_coverage=1 00:04:28.466 --rc genhtml_function_coverage=1 00:04:28.466 --rc genhtml_legend=1 00:04:28.466 --rc geninfo_all_blocks=1 00:04:28.466 --rc geninfo_unexecuted_blocks=1 00:04:28.466 00:04:28.466 ' 00:04:28.466 12:12:14 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.466 --rc genhtml_branch_coverage=1 00:04:28.466 --rc genhtml_function_coverage=1 00:04:28.466 --rc genhtml_legend=1 00:04:28.467 --rc geninfo_all_blocks=1 00:04:28.467 --rc geninfo_unexecuted_blocks=1 00:04:28.467 00:04:28.467 ' 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:28.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.467 --rc genhtml_branch_coverage=1 00:04:28.467 --rc genhtml_function_coverage=1 00:04:28.467 --rc genhtml_legend=1 00:04:28.467 --rc geninfo_all_blocks=1 00:04:28.467 --rc geninfo_unexecuted_blocks=1 00:04:28.467 00:04:28.467 ' 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.467 --rc genhtml_branch_coverage=1 00:04:28.467 --rc genhtml_function_coverage=1 00:04:28.467 --rc genhtml_legend=1 00:04:28.467 --rc geninfo_all_blocks=1 00:04:28.467 --rc geninfo_unexecuted_blocks=1 00:04:28.467 00:04:28.467 ' 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.467 12:12:14 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:28.467 12:12:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:28.467 12:12:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.467 12:12:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.467 12:12:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.467 12:12:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.467 12:12:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.467 12:12:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.467 12:12:14 json_config -- paths/export.sh@5 -- # export PATH 00:04:28.467 12:12:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@51 -- # : 0 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:28.467 12:12:14 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.467 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.467 12:12:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:28.467 INFO: JSON configuration test init 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.467 Waiting for target to run... 00:04:28.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:28.467 12:12:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:28.467 12:12:14 json_config -- json_config/common.sh@9 -- # local app=target 00:04:28.467 12:12:14 json_config -- json_config/common.sh@10 -- # shift 00:04:28.467 12:12:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.467 12:12:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.467 12:12:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.467 12:12:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.467 12:12:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.467 12:12:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57203 00:04:28.467 12:12:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.467 12:12:14 json_config -- json_config/common.sh@25 -- # waitforlisten 57203 /var/tmp/spdk_tgt.sock 00:04:28.467 12:12:14 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@835 -- # '[' -z 57203 ']' 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.467 12:12:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.467 [2024-12-06 12:12:15.040088] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
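The waitforlisten step above simply polls the freshly started target until its RPC socket answers; a rough equivalent, assuming the same socket path used in this run, is:

# illustrative sketch only -- the real helper is waitforlisten in test/common/autotest_common.sh
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 1 100); do
    if "$RPC_PY" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; then
        echo 'target is listening on /var/tmp/spdk_tgt.sock'
        break
    fi
    sleep 0.1
done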
00:04:28.467 [2024-12-06 12:12:15.040200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57203 ] 00:04:28.725 [2024-12-06 12:12:15.367059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.984 [2024-12-06 12:12:15.390362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.550 00:04:29.550 12:12:16 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.550 12:12:16 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:29.550 12:12:16 json_config -- json_config/common.sh@26 -- # echo '' 00:04:29.550 12:12:16 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:29.550 12:12:16 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:29.550 12:12:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.550 12:12:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.550 12:12:16 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:29.550 12:12:16 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:29.550 12:12:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.550 12:12:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.550 12:12:16 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:29.550 12:12:16 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:29.550 12:12:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:29.808 [2024-12-06 12:12:16.342141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:30.066 12:12:16 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:30.066 12:12:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:30.066 12:12:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.066 12:12:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.066 12:12:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:30.066 12:12:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:30.066 12:12:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:30.066 12:12:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:30.066 12:12:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:30.066 12:12:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:30.066 12:12:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:30.066 12:12:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@54 -- # sort 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:30.324 12:12:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.324 12:12:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:30.324 12:12:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.324 12:12:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:30.324 12:12:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.324 12:12:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.582 MallocForNvmf0 00:04:30.582 12:12:17 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.582 12:12:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.841 MallocForNvmf1 00:04:30.841 12:12:17 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.841 12:12:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.841 [2024-12-06 12:12:17.475873] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.841 12:12:17 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.841 12:12:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:31.408 12:12:17 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.408 12:12:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.408 12:12:17 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.408 12:12:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.667 12:12:18 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.667 12:12:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.925 [2024-12-06 12:12:18.444324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.925 12:12:18 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:31.925 12:12:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.925 12:12:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.925 12:12:18 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:31.925 12:12:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.925 12:12:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.925 12:12:18 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:31.925 12:12:18 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.925 12:12:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:32.184 MallocBdevForConfigChangeCheck 00:04:32.184 12:12:18 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:32.184 12:12:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:32.184 12:12:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.184 12:12:18 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:32.184 12:12:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.752 INFO: shutting down applications... 00:04:32.752 12:12:19 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
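Before the shutdown that follows, the target was populated entirely through RPCs; condensed, the sequence driven above is roughly (method names and arguments mirror the tgt_rpc calls in this log, but this is an illustrative replay, not the exact script):

# condensed from the tgt_rpc calls shown above
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json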
00:04:32.752 12:12:19 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:32.752 12:12:19 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:32.752 12:12:19 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:32.752 12:12:19 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:33.011 Calling clear_iscsi_subsystem 00:04:33.011 Calling clear_nvmf_subsystem 00:04:33.011 Calling clear_nbd_subsystem 00:04:33.011 Calling clear_ublk_subsystem 00:04:33.011 Calling clear_vhost_blk_subsystem 00:04:33.011 Calling clear_vhost_scsi_subsystem 00:04:33.011 Calling clear_bdev_subsystem 00:04:33.011 12:12:19 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:33.011 12:12:19 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:33.011 12:12:19 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:33.011 12:12:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.011 12:12:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:33.011 12:12:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:33.271 12:12:19 json_config -- json_config/json_config.sh@352 -- # break 00:04:33.271 12:12:19 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:33.271 12:12:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:33.271 12:12:19 json_config -- json_config/common.sh@31 -- # local app=target 00:04:33.271 12:12:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.271 12:12:19 json_config -- json_config/common.sh@35 -- # [[ -n 57203 ]] 00:04:33.271 12:12:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57203 00:04:33.271 12:12:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.271 12:12:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.271 12:12:19 json_config -- json_config/common.sh@41 -- # kill -0 57203 00:04:33.271 12:12:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.839 12:12:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.839 12:12:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.839 SPDK target shutdown done 00:04:33.839 INFO: relaunching applications... 00:04:33.839 12:12:20 json_config -- json_config/common.sh@41 -- # kill -0 57203 00:04:33.839 12:12:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.839 12:12:20 json_config -- json_config/common.sh@43 -- # break 00:04:33.839 12:12:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.839 12:12:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.839 12:12:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
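The shutdown that just completed is nothing more than a SIGINT followed by a bounded poll on the target's pid, roughly:

# illustrative sketch of the wait loop above (pid 57203 is the target from this run)
kill -SIGINT 57203
for (( i = 0; i < 30; i++ )); do
    kill -0 57203 2>/dev/null || break   # pid gone -> SPDK target shutdown done
    sleep 0.5
done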
00:04:33.839 12:12:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.839 12:12:20 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.839 12:12:20 json_config -- json_config/common.sh@10 -- # shift 00:04:33.839 12:12:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.839 12:12:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.839 12:12:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.839 12:12:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.839 12:12:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.839 12:12:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57399 00:04:33.839 12:12:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.839 12:12:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.839 Waiting for target to run... 00:04:33.839 12:12:20 json_config -- json_config/common.sh@25 -- # waitforlisten 57399 /var/tmp/spdk_tgt.sock 00:04:33.839 12:12:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 57399 ']' 00:04:33.839 12:12:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.839 12:12:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.839 12:12:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.839 12:12:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.839 12:12:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.839 [2024-12-06 12:12:20.489899] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:33.839 [2024-12-06 12:12:20.490243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57399 ] 00:04:34.408 [2024-12-06 12:12:20.801492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.408 [2024-12-06 12:12:20.822368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.408 [2024-12-06 12:12:20.952039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:34.667 [2024-12-06 12:12:21.144977] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.667 [2024-12-06 12:12:21.177091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.926 00:04:34.926 INFO: Checking if target configuration is the same... 
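The configuration check announced above diffs the JSON saved before shutdown against what the relaunched target reports now, with both sides normalized by config_filter.py so key ordering cannot cause false mismatches. In outline (the temp file names below are placeholders; the real script mktemps them):

# illustrative outline of json_diff.sh + config_filter.py as used above
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
$RPC save_config | "$FILTER" -method sort > /tmp/live_config.json
"$FILTER" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved_config.json
diff -u /tmp/saved_config.json /tmp/live_config.json && echo 'INFO: JSON config files are the same'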
00:04:34.926 12:12:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.926 12:12:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:34.926 12:12:21 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.926 12:12:21 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:34.926 12:12:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:34.926 12:12:21 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.926 12:12:21 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:34.926 12:12:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.926 + '[' 2 -ne 2 ']' 00:04:34.926 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:34.926 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:34.926 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:34.926 +++ basename /dev/fd/62 00:04:34.926 ++ mktemp /tmp/62.XXX 00:04:34.926 + tmp_file_1=/tmp/62.yui 00:04:34.926 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.926 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.926 + tmp_file_2=/tmp/spdk_tgt_config.json.GjR 00:04:34.926 + ret=0 00:04:34.926 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.186 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.186 + diff -u /tmp/62.yui /tmp/spdk_tgt_config.json.GjR 00:04:35.186 INFO: JSON config files are the same 00:04:35.186 + echo 'INFO: JSON config files are the same' 00:04:35.186 + rm /tmp/62.yui /tmp/spdk_tgt_config.json.GjR 00:04:35.186 + exit 0 00:04:35.186 INFO: changing configuration and checking if this can be detected... 00:04:35.186 12:12:21 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:35.186 12:12:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:35.186 12:12:21 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.186 12:12:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.754 12:12:22 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.754 12:12:22 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:35.754 12:12:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.754 + '[' 2 -ne 2 ']' 00:04:35.754 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:35.754 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:35.754 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:35.754 +++ basename /dev/fd/62 00:04:35.754 ++ mktemp /tmp/62.XXX 00:04:35.754 + tmp_file_1=/tmp/62.AMg 00:04:35.754 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.754 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.754 + tmp_file_2=/tmp/spdk_tgt_config.json.s3I 00:04:35.754 + ret=0 00:04:35.754 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:36.014 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:36.014 + diff -u /tmp/62.AMg /tmp/spdk_tgt_config.json.s3I 00:04:36.014 + ret=1 00:04:36.014 + echo '=== Start of file: /tmp/62.AMg ===' 00:04:36.014 + cat /tmp/62.AMg 00:04:36.014 + echo '=== End of file: /tmp/62.AMg ===' 00:04:36.014 + echo '' 00:04:36.014 + echo '=== Start of file: /tmp/spdk_tgt_config.json.s3I ===' 00:04:36.014 + cat /tmp/spdk_tgt_config.json.s3I 00:04:36.014 + echo '=== End of file: /tmp/spdk_tgt_config.json.s3I ===' 00:04:36.014 + echo '' 00:04:36.014 + rm /tmp/62.AMg /tmp/spdk_tgt_config.json.s3I 00:04:36.014 + exit 1 00:04:36.014 INFO: configuration change detected. 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@324 -- # [[ -n 57399 ]] 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.014 12:12:22 json_config -- json_config/json_config.sh@330 -- # killprocess 57399 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@954 -- # '[' -z 57399 ']' 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@958 -- # kill -0 57399 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@959 -- # uname 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57399 00:04:36.014 
killing process with pid 57399 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57399' 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@973 -- # kill 57399 00:04:36.014 12:12:22 json_config -- common/autotest_common.sh@978 -- # wait 57399 00:04:36.274 12:12:22 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:36.274 12:12:22 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:36.274 12:12:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.274 12:12:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.274 INFO: Success 00:04:36.274 12:12:22 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:36.274 12:12:22 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:36.274 ************************************ 00:04:36.274 END TEST json_config 00:04:36.274 ************************************ 00:04:36.274 00:04:36.274 real 0m8.085s 00:04:36.274 user 0m11.618s 00:04:36.274 sys 0m1.357s 00:04:36.274 12:12:22 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.274 12:12:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.274 12:12:22 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.274 12:12:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.274 12:12:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.274 12:12:22 -- common/autotest_common.sh@10 -- # set +x 00:04:36.274 ************************************ 00:04:36.274 START TEST json_config_extra_key 00:04:36.274 ************************************ 00:04:36.274 12:12:22 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.534 12:12:22 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.534 12:12:22 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.534 12:12:22 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.534 12:12:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.534 12:12:23 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.534 12:12:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:36.534 12:12:23 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.534 12:12:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.534 --rc genhtml_branch_coverage=1 00:04:36.534 --rc genhtml_function_coverage=1 00:04:36.534 --rc genhtml_legend=1 00:04:36.534 --rc geninfo_all_blocks=1 00:04:36.534 --rc geninfo_unexecuted_blocks=1 00:04:36.534 00:04:36.534 ' 00:04:36.534 12:12:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.534 --rc genhtml_branch_coverage=1 00:04:36.534 --rc genhtml_function_coverage=1 00:04:36.534 --rc genhtml_legend=1 00:04:36.534 --rc geninfo_all_blocks=1 00:04:36.534 --rc geninfo_unexecuted_blocks=1 00:04:36.534 00:04:36.534 ' 00:04:36.534 12:12:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.534 --rc genhtml_branch_coverage=1 00:04:36.534 --rc genhtml_function_coverage=1 00:04:36.534 --rc genhtml_legend=1 00:04:36.534 --rc geninfo_all_blocks=1 00:04:36.534 --rc geninfo_unexecuted_blocks=1 00:04:36.534 00:04:36.534 ' 00:04:36.534 12:12:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.534 --rc genhtml_branch_coverage=1 00:04:36.534 --rc genhtml_function_coverage=1 00:04:36.534 --rc genhtml_legend=1 00:04:36.534 --rc geninfo_all_blocks=1 00:04:36.534 --rc geninfo_unexecuted_blocks=1 00:04:36.534 00:04:36.534 ' 00:04:36.534 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.535 12:12:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.535 12:12:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.535 12:12:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.535 12:12:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.535 12:12:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.535 12:12:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.535 12:12:23 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.535 12:12:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:36.535 12:12:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.535 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.535 12:12:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:36.535 INFO: launching applications... 00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
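Note: the launch that follows starts a second spdk_tgt with the extra-key JSON and blocks until its RPC socket answers. A stripped-down sketch of that start-and-wait step, using the flags and paths shown in this run; the real waitforlisten in autotest_common.sh additionally caps retries and checks that the pid is still alive:

    spdk=/home/vagrant/spdk_repo/spdk
    rpc=/var/tmp/spdk_tgt.sock
    "$spdk"/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$rpc" \
        --json "$spdk"/test/json_config/extra_key.json &
    app_pid=$!
    until "$spdk"/scripts/rpc.py -s "$rpc" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # poll until the UNIX-domain RPC socket is up
    done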
00:04:36.535 12:12:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:36.535 12:12:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:36.535 12:12:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:36.535 12:12:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.535 12:12:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.535 12:12:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.535 12:12:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.535 12:12:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.535 12:12:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57547 00:04:36.535 12:12:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.535 12:12:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:36.535 Waiting for target to run... 00:04:36.535 12:12:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57547 /var/tmp/spdk_tgt.sock 00:04:36.535 12:12:23 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57547 ']' 00:04:36.535 12:12:23 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.535 12:12:23 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.535 12:12:23 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.535 12:12:23 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.535 12:12:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.535 [2024-12-06 12:12:23.174071] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:36.535 [2024-12-06 12:12:23.174602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57547 ] 00:04:37.104 [2024-12-06 12:12:23.484118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.104 [2024-12-06 12:12:23.505709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.104 [2024-12-06 12:12:23.530716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:37.673 00:04:37.673 INFO: shutting down applications... 00:04:37.673 12:12:24 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.673 12:12:24 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:37.673 12:12:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:37.673 12:12:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:37.673 12:12:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:37.673 12:12:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:37.673 12:12:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.673 12:12:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57547 ]] 00:04:37.673 12:12:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57547 00:04:37.673 12:12:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.673 12:12:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.673 12:12:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57547 00:04:37.673 12:12:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.241 12:12:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.241 12:12:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.241 12:12:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57547 00:04:38.241 12:12:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:38.241 12:12:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:38.241 12:12:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:38.241 12:12:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:38.241 SPDK target shutdown done 00:04:38.241 12:12:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:38.242 Success 00:04:38.242 00:04:38.242 real 0m1.787s 00:04:38.242 user 0m1.620s 00:04:38.242 sys 0m0.342s 00:04:38.242 ************************************ 00:04:38.242 END TEST json_config_extra_key 00:04:38.242 ************************************ 00:04:38.242 12:12:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.242 12:12:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:38.242 12:12:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:38.242 12:12:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.242 12:12:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.242 12:12:24 -- common/autotest_common.sh@10 -- # set +x 00:04:38.242 ************************************ 00:04:38.242 START TEST alias_rpc 00:04:38.242 ************************************ 00:04:38.242 12:12:24 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:38.242 * Looking for test storage... 
00:04:38.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:38.242 12:12:24 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.242 12:12:24 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.242 12:12:24 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.501 12:12:24 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.501 12:12:24 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:38.501 12:12:24 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.501 12:12:24 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.501 --rc genhtml_branch_coverage=1 00:04:38.501 --rc genhtml_function_coverage=1 00:04:38.501 --rc genhtml_legend=1 00:04:38.501 --rc geninfo_all_blocks=1 00:04:38.501 --rc geninfo_unexecuted_blocks=1 00:04:38.501 00:04:38.501 ' 00:04:38.501 12:12:24 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.501 --rc genhtml_branch_coverage=1 00:04:38.501 --rc genhtml_function_coverage=1 00:04:38.501 --rc genhtml_legend=1 00:04:38.501 --rc geninfo_all_blocks=1 00:04:38.501 --rc geninfo_unexecuted_blocks=1 00:04:38.501 00:04:38.501 ' 00:04:38.501 12:12:24 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.501 --rc genhtml_branch_coverage=1 00:04:38.501 --rc genhtml_function_coverage=1 00:04:38.501 --rc genhtml_legend=1 00:04:38.501 --rc geninfo_all_blocks=1 00:04:38.501 --rc geninfo_unexecuted_blocks=1 00:04:38.501 00:04:38.501 ' 00:04:38.501 12:12:24 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.502 --rc genhtml_branch_coverage=1 00:04:38.502 --rc genhtml_function_coverage=1 00:04:38.502 --rc genhtml_legend=1 00:04:38.502 --rc geninfo_all_blocks=1 00:04:38.502 --rc geninfo_unexecuted_blocks=1 00:04:38.502 00:04:38.502 ' 00:04:38.502 12:12:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:38.502 12:12:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57620 00:04:38.502 12:12:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.502 12:12:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57620 00:04:38.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.502 12:12:24 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57620 ']' 00:04:38.502 12:12:24 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.502 12:12:24 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.502 12:12:24 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.502 12:12:24 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.502 12:12:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.502 [2024-12-06 12:12:25.024464] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:38.502 [2024-12-06 12:12:25.024552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57620 ] 00:04:38.760 [2024-12-06 12:12:25.163527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.760 [2024-12-06 12:12:25.192138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.760 [2024-12-06 12:12:25.228551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:39.327 12:12:25 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.327 12:12:25 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:39.327 12:12:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:39.586 12:12:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57620 00:04:39.844 12:12:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57620 ']' 00:04:39.844 12:12:26 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57620 00:04:39.844 12:12:26 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:39.844 12:12:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.844 12:12:26 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57620 00:04:39.844 killing process with pid 57620 00:04:39.844 12:12:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.844 12:12:26 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.844 12:12:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57620' 00:04:39.845 12:12:26 alias_rpc -- common/autotest_common.sh@973 -- # kill 57620 00:04:39.845 12:12:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 57620 00:04:39.845 ************************************ 00:04:39.845 END TEST alias_rpc 00:04:39.845 ************************************ 00:04:39.845 00:04:39.845 real 0m1.747s 00:04:39.845 user 0m2.071s 00:04:39.845 sys 0m0.343s 00:04:39.845 12:12:26 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.845 12:12:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.103 12:12:26 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:40.104 12:12:26 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:40.104 12:12:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.104 12:12:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.104 12:12:26 -- common/autotest_common.sh@10 -- # set +x 00:04:40.104 ************************************ 00:04:40.104 START TEST spdkcli_tcp 00:04:40.104 ************************************ 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:40.104 * Looking for test storage... 
00:04:40.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.104 12:12:26 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.104 --rc genhtml_branch_coverage=1 00:04:40.104 --rc genhtml_function_coverage=1 00:04:40.104 --rc genhtml_legend=1 00:04:40.104 --rc geninfo_all_blocks=1 00:04:40.104 --rc geninfo_unexecuted_blocks=1 00:04:40.104 00:04:40.104 ' 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.104 --rc genhtml_branch_coverage=1 00:04:40.104 --rc genhtml_function_coverage=1 00:04:40.104 --rc genhtml_legend=1 00:04:40.104 --rc geninfo_all_blocks=1 00:04:40.104 --rc geninfo_unexecuted_blocks=1 00:04:40.104 
00:04:40.104 ' 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.104 --rc genhtml_branch_coverage=1 00:04:40.104 --rc genhtml_function_coverage=1 00:04:40.104 --rc genhtml_legend=1 00:04:40.104 --rc geninfo_all_blocks=1 00:04:40.104 --rc geninfo_unexecuted_blocks=1 00:04:40.104 00:04:40.104 ' 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.104 --rc genhtml_branch_coverage=1 00:04:40.104 --rc genhtml_function_coverage=1 00:04:40.104 --rc genhtml_legend=1 00:04:40.104 --rc geninfo_all_blocks=1 00:04:40.104 --rc geninfo_unexecuted_blocks=1 00:04:40.104 00:04:40.104 ' 00:04:40.104 12:12:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:40.104 12:12:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:40.104 12:12:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:40.104 12:12:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:40.104 12:12:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:40.104 12:12:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:40.104 12:12:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.104 12:12:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57704 00:04:40.104 12:12:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57704 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57704 ']' 00:04:40.104 12:12:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.104 12:12:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.363 [2024-12-06 12:12:26.815649] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:40.363 [2024-12-06 12:12:26.815976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57704 ] 00:04:40.363 [2024-12-06 12:12:26.955395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.363 [2024-12-06 12:12:26.985276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.363 [2024-12-06 12:12:26.985283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.623 [2024-12-06 12:12:27.023829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:40.623 12:12:27 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.623 12:12:27 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:40.623 12:12:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57713 00:04:40.623 12:12:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:40.623 12:12:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:40.882 [ 00:04:40.882 "bdev_malloc_delete", 00:04:40.882 "bdev_malloc_create", 00:04:40.882 "bdev_null_resize", 00:04:40.882 "bdev_null_delete", 00:04:40.882 "bdev_null_create", 00:04:40.882 "bdev_nvme_cuse_unregister", 00:04:40.882 "bdev_nvme_cuse_register", 00:04:40.882 "bdev_opal_new_user", 00:04:40.882 "bdev_opal_set_lock_state", 00:04:40.882 "bdev_opal_delete", 00:04:40.882 "bdev_opal_get_info", 00:04:40.882 "bdev_opal_create", 00:04:40.882 "bdev_nvme_opal_revert", 00:04:40.882 "bdev_nvme_opal_init", 00:04:40.882 "bdev_nvme_send_cmd", 00:04:40.882 "bdev_nvme_set_keys", 00:04:40.882 "bdev_nvme_get_path_iostat", 00:04:40.882 "bdev_nvme_get_mdns_discovery_info", 00:04:40.882 "bdev_nvme_stop_mdns_discovery", 00:04:40.882 "bdev_nvme_start_mdns_discovery", 00:04:40.882 "bdev_nvme_set_multipath_policy", 00:04:40.882 "bdev_nvme_set_preferred_path", 00:04:40.882 "bdev_nvme_get_io_paths", 00:04:40.883 "bdev_nvme_remove_error_injection", 00:04:40.883 "bdev_nvme_add_error_injection", 00:04:40.883 "bdev_nvme_get_discovery_info", 00:04:40.883 "bdev_nvme_stop_discovery", 00:04:40.883 "bdev_nvme_start_discovery", 00:04:40.883 "bdev_nvme_get_controller_health_info", 00:04:40.883 "bdev_nvme_disable_controller", 00:04:40.883 "bdev_nvme_enable_controller", 00:04:40.883 "bdev_nvme_reset_controller", 00:04:40.883 "bdev_nvme_get_transport_statistics", 00:04:40.883 "bdev_nvme_apply_firmware", 00:04:40.883 "bdev_nvme_detach_controller", 00:04:40.883 "bdev_nvme_get_controllers", 00:04:40.883 "bdev_nvme_attach_controller", 00:04:40.883 "bdev_nvme_set_hotplug", 00:04:40.883 "bdev_nvme_set_options", 00:04:40.883 "bdev_passthru_delete", 00:04:40.883 "bdev_passthru_create", 00:04:40.883 "bdev_lvol_set_parent_bdev", 00:04:40.883 "bdev_lvol_set_parent", 00:04:40.883 "bdev_lvol_check_shallow_copy", 00:04:40.883 "bdev_lvol_start_shallow_copy", 00:04:40.883 "bdev_lvol_grow_lvstore", 00:04:40.883 "bdev_lvol_get_lvols", 00:04:40.883 "bdev_lvol_get_lvstores", 00:04:40.883 "bdev_lvol_delete", 00:04:40.883 "bdev_lvol_set_read_only", 00:04:40.883 "bdev_lvol_resize", 00:04:40.883 "bdev_lvol_decouple_parent", 00:04:40.883 "bdev_lvol_inflate", 00:04:40.883 "bdev_lvol_rename", 00:04:40.883 "bdev_lvol_clone_bdev", 00:04:40.883 "bdev_lvol_clone", 00:04:40.883 "bdev_lvol_snapshot", 
00:04:40.883 "bdev_lvol_create", 00:04:40.883 "bdev_lvol_delete_lvstore", 00:04:40.883 "bdev_lvol_rename_lvstore", 00:04:40.883 "bdev_lvol_create_lvstore", 00:04:40.883 "bdev_raid_set_options", 00:04:40.883 "bdev_raid_remove_base_bdev", 00:04:40.883 "bdev_raid_add_base_bdev", 00:04:40.883 "bdev_raid_delete", 00:04:40.883 "bdev_raid_create", 00:04:40.883 "bdev_raid_get_bdevs", 00:04:40.883 "bdev_error_inject_error", 00:04:40.883 "bdev_error_delete", 00:04:40.883 "bdev_error_create", 00:04:40.883 "bdev_split_delete", 00:04:40.883 "bdev_split_create", 00:04:40.883 "bdev_delay_delete", 00:04:40.883 "bdev_delay_create", 00:04:40.883 "bdev_delay_update_latency", 00:04:40.883 "bdev_zone_block_delete", 00:04:40.883 "bdev_zone_block_create", 00:04:40.883 "blobfs_create", 00:04:40.883 "blobfs_detect", 00:04:40.883 "blobfs_set_cache_size", 00:04:40.883 "bdev_aio_delete", 00:04:40.883 "bdev_aio_rescan", 00:04:40.883 "bdev_aio_create", 00:04:40.883 "bdev_ftl_set_property", 00:04:40.883 "bdev_ftl_get_properties", 00:04:40.883 "bdev_ftl_get_stats", 00:04:40.883 "bdev_ftl_unmap", 00:04:40.883 "bdev_ftl_unload", 00:04:40.883 "bdev_ftl_delete", 00:04:40.883 "bdev_ftl_load", 00:04:40.883 "bdev_ftl_create", 00:04:40.883 "bdev_virtio_attach_controller", 00:04:40.883 "bdev_virtio_scsi_get_devices", 00:04:40.883 "bdev_virtio_detach_controller", 00:04:40.883 "bdev_virtio_blk_set_hotplug", 00:04:40.883 "bdev_iscsi_delete", 00:04:40.883 "bdev_iscsi_create", 00:04:40.883 "bdev_iscsi_set_options", 00:04:40.883 "bdev_uring_delete", 00:04:40.883 "bdev_uring_rescan", 00:04:40.883 "bdev_uring_create", 00:04:40.883 "accel_error_inject_error", 00:04:40.883 "ioat_scan_accel_module", 00:04:40.883 "dsa_scan_accel_module", 00:04:40.883 "iaa_scan_accel_module", 00:04:40.883 "keyring_file_remove_key", 00:04:40.883 "keyring_file_add_key", 00:04:40.883 "keyring_linux_set_options", 00:04:40.883 "fsdev_aio_delete", 00:04:40.883 "fsdev_aio_create", 00:04:40.883 "iscsi_get_histogram", 00:04:40.883 "iscsi_enable_histogram", 00:04:40.883 "iscsi_set_options", 00:04:40.883 "iscsi_get_auth_groups", 00:04:40.883 "iscsi_auth_group_remove_secret", 00:04:40.883 "iscsi_auth_group_add_secret", 00:04:40.883 "iscsi_delete_auth_group", 00:04:40.883 "iscsi_create_auth_group", 00:04:40.883 "iscsi_set_discovery_auth", 00:04:40.883 "iscsi_get_options", 00:04:40.883 "iscsi_target_node_request_logout", 00:04:40.883 "iscsi_target_node_set_redirect", 00:04:40.883 "iscsi_target_node_set_auth", 00:04:40.883 "iscsi_target_node_add_lun", 00:04:40.883 "iscsi_get_stats", 00:04:40.883 "iscsi_get_connections", 00:04:40.883 "iscsi_portal_group_set_auth", 00:04:40.883 "iscsi_start_portal_group", 00:04:40.883 "iscsi_delete_portal_group", 00:04:40.883 "iscsi_create_portal_group", 00:04:40.883 "iscsi_get_portal_groups", 00:04:40.883 "iscsi_delete_target_node", 00:04:40.883 "iscsi_target_node_remove_pg_ig_maps", 00:04:40.883 "iscsi_target_node_add_pg_ig_maps", 00:04:40.883 "iscsi_create_target_node", 00:04:40.883 "iscsi_get_target_nodes", 00:04:40.883 "iscsi_delete_initiator_group", 00:04:40.883 "iscsi_initiator_group_remove_initiators", 00:04:40.883 "iscsi_initiator_group_add_initiators", 00:04:40.883 "iscsi_create_initiator_group", 00:04:40.883 "iscsi_get_initiator_groups", 00:04:40.883 "nvmf_set_crdt", 00:04:40.883 "nvmf_set_config", 00:04:40.883 "nvmf_set_max_subsystems", 00:04:40.883 "nvmf_stop_mdns_prr", 00:04:40.883 "nvmf_publish_mdns_prr", 00:04:40.883 "nvmf_subsystem_get_listeners", 00:04:40.883 "nvmf_subsystem_get_qpairs", 00:04:40.883 
"nvmf_subsystem_get_controllers", 00:04:40.883 "nvmf_get_stats", 00:04:40.883 "nvmf_get_transports", 00:04:40.883 "nvmf_create_transport", 00:04:40.883 "nvmf_get_targets", 00:04:40.883 "nvmf_delete_target", 00:04:40.883 "nvmf_create_target", 00:04:40.883 "nvmf_subsystem_allow_any_host", 00:04:40.883 "nvmf_subsystem_set_keys", 00:04:40.883 "nvmf_subsystem_remove_host", 00:04:40.883 "nvmf_subsystem_add_host", 00:04:40.883 "nvmf_ns_remove_host", 00:04:40.883 "nvmf_ns_add_host", 00:04:40.883 "nvmf_subsystem_remove_ns", 00:04:40.883 "nvmf_subsystem_set_ns_ana_group", 00:04:40.883 "nvmf_subsystem_add_ns", 00:04:40.883 "nvmf_subsystem_listener_set_ana_state", 00:04:40.883 "nvmf_discovery_get_referrals", 00:04:40.883 "nvmf_discovery_remove_referral", 00:04:40.883 "nvmf_discovery_add_referral", 00:04:40.883 "nvmf_subsystem_remove_listener", 00:04:40.883 "nvmf_subsystem_add_listener", 00:04:40.883 "nvmf_delete_subsystem", 00:04:40.883 "nvmf_create_subsystem", 00:04:40.883 "nvmf_get_subsystems", 00:04:40.883 "env_dpdk_get_mem_stats", 00:04:40.883 "nbd_get_disks", 00:04:40.883 "nbd_stop_disk", 00:04:40.883 "nbd_start_disk", 00:04:40.883 "ublk_recover_disk", 00:04:40.883 "ublk_get_disks", 00:04:40.883 "ublk_stop_disk", 00:04:40.883 "ublk_start_disk", 00:04:40.883 "ublk_destroy_target", 00:04:40.883 "ublk_create_target", 00:04:40.883 "virtio_blk_create_transport", 00:04:40.883 "virtio_blk_get_transports", 00:04:40.883 "vhost_controller_set_coalescing", 00:04:40.883 "vhost_get_controllers", 00:04:40.883 "vhost_delete_controller", 00:04:40.883 "vhost_create_blk_controller", 00:04:40.883 "vhost_scsi_controller_remove_target", 00:04:40.883 "vhost_scsi_controller_add_target", 00:04:40.883 "vhost_start_scsi_controller", 00:04:40.883 "vhost_create_scsi_controller", 00:04:40.883 "thread_set_cpumask", 00:04:40.883 "scheduler_set_options", 00:04:40.883 "framework_get_governor", 00:04:40.883 "framework_get_scheduler", 00:04:40.883 "framework_set_scheduler", 00:04:40.883 "framework_get_reactors", 00:04:40.883 "thread_get_io_channels", 00:04:40.883 "thread_get_pollers", 00:04:40.883 "thread_get_stats", 00:04:40.883 "framework_monitor_context_switch", 00:04:40.883 "spdk_kill_instance", 00:04:40.883 "log_enable_timestamps", 00:04:40.883 "log_get_flags", 00:04:40.883 "log_clear_flag", 00:04:40.883 "log_set_flag", 00:04:40.883 "log_get_level", 00:04:40.883 "log_set_level", 00:04:40.883 "log_get_print_level", 00:04:40.883 "log_set_print_level", 00:04:40.883 "framework_enable_cpumask_locks", 00:04:40.883 "framework_disable_cpumask_locks", 00:04:40.883 "framework_wait_init", 00:04:40.883 "framework_start_init", 00:04:40.883 "scsi_get_devices", 00:04:40.883 "bdev_get_histogram", 00:04:40.883 "bdev_enable_histogram", 00:04:40.883 "bdev_set_qos_limit", 00:04:40.883 "bdev_set_qd_sampling_period", 00:04:40.883 "bdev_get_bdevs", 00:04:40.883 "bdev_reset_iostat", 00:04:40.883 "bdev_get_iostat", 00:04:40.883 "bdev_examine", 00:04:40.883 "bdev_wait_for_examine", 00:04:40.883 "bdev_set_options", 00:04:40.883 "accel_get_stats", 00:04:40.883 "accel_set_options", 00:04:40.883 "accel_set_driver", 00:04:40.883 "accel_crypto_key_destroy", 00:04:40.883 "accel_crypto_keys_get", 00:04:40.883 "accel_crypto_key_create", 00:04:40.883 "accel_assign_opc", 00:04:40.883 "accel_get_module_info", 00:04:40.883 "accel_get_opc_assignments", 00:04:40.883 "vmd_rescan", 00:04:40.883 "vmd_remove_device", 00:04:40.883 "vmd_enable", 00:04:40.883 "sock_get_default_impl", 00:04:40.883 "sock_set_default_impl", 00:04:40.883 "sock_impl_set_options", 00:04:40.883 
"sock_impl_get_options", 00:04:40.883 "iobuf_get_stats", 00:04:40.883 "iobuf_set_options", 00:04:40.883 "keyring_get_keys", 00:04:40.883 "framework_get_pci_devices", 00:04:40.883 "framework_get_config", 00:04:40.883 "framework_get_subsystems", 00:04:40.883 "fsdev_set_opts", 00:04:40.883 "fsdev_get_opts", 00:04:40.883 "trace_get_info", 00:04:40.883 "trace_get_tpoint_group_mask", 00:04:40.883 "trace_disable_tpoint_group", 00:04:40.883 "trace_enable_tpoint_group", 00:04:40.883 "trace_clear_tpoint_mask", 00:04:40.883 "trace_set_tpoint_mask", 00:04:40.883 "notify_get_notifications", 00:04:40.883 "notify_get_types", 00:04:40.883 "spdk_get_version", 00:04:40.883 "rpc_get_methods" 00:04:40.883 ] 00:04:40.883 12:12:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.883 12:12:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:40.883 12:12:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57704 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57704 ']' 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57704 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57704 00:04:40.883 killing process with pid 57704 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57704' 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57704 00:04:40.883 12:12:27 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57704 00:04:41.142 ************************************ 00:04:41.142 END TEST spdkcli_tcp 00:04:41.142 ************************************ 00:04:41.142 00:04:41.142 real 0m1.154s 00:04:41.142 user 0m1.986s 00:04:41.142 sys 0m0.355s 00:04:41.142 12:12:27 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.142 12:12:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.142 12:12:27 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.142 12:12:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.142 12:12:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.142 12:12:27 -- common/autotest_common.sh@10 -- # set +x 00:04:41.142 ************************************ 00:04:41.142 START TEST dpdk_mem_utility 00:04:41.142 ************************************ 00:04:41.143 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.402 * Looking for test storage... 
00:04:41.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.402 12:12:27 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.402 --rc genhtml_branch_coverage=1 00:04:41.402 --rc genhtml_function_coverage=1 00:04:41.402 --rc genhtml_legend=1 00:04:41.402 --rc geninfo_all_blocks=1 00:04:41.402 --rc geninfo_unexecuted_blocks=1 00:04:41.402 00:04:41.402 ' 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.402 --rc 
genhtml_branch_coverage=1 00:04:41.402 --rc genhtml_function_coverage=1 00:04:41.402 --rc genhtml_legend=1 00:04:41.402 --rc geninfo_all_blocks=1 00:04:41.402 --rc geninfo_unexecuted_blocks=1 00:04:41.402 00:04:41.402 ' 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.402 --rc genhtml_branch_coverage=1 00:04:41.402 --rc genhtml_function_coverage=1 00:04:41.402 --rc genhtml_legend=1 00:04:41.402 --rc geninfo_all_blocks=1 00:04:41.402 --rc geninfo_unexecuted_blocks=1 00:04:41.402 00:04:41.402 ' 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.402 --rc genhtml_branch_coverage=1 00:04:41.402 --rc genhtml_function_coverage=1 00:04:41.402 --rc genhtml_legend=1 00:04:41.402 --rc geninfo_all_blocks=1 00:04:41.402 --rc geninfo_unexecuted_blocks=1 00:04:41.402 00:04:41.402 ' 00:04:41.402 12:12:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:41.402 12:12:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57790 00:04:41.402 12:12:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.402 12:12:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57790 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57790 ']' 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.402 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.403 12:12:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.403 [2024-12-06 12:12:28.018800] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:41.403 [2024-12-06 12:12:28.019117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57790 ] 00:04:41.662 [2024-12-06 12:12:28.165007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.662 [2024-12-06 12:12:28.192512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.662 [2024-12-06 12:12:28.229302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:42.600 12:12:28 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.600 12:12:28 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:42.600 12:12:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:42.600 12:12:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:42.600 12:12:28 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.600 12:12:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.600 { 00:04:42.600 "filename": "/tmp/spdk_mem_dump.txt" 00:04:42.600 } 00:04:42.600 12:12:28 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.600 12:12:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:42.600 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:42.600 1 heaps totaling size 818.000000 MiB 00:04:42.600 size: 818.000000 MiB heap id: 0 00:04:42.600 end heaps---------- 00:04:42.600 9 mempools totaling size 603.782043 MiB 00:04:42.600 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:42.600 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:42.601 size: 100.555481 MiB name: bdev_io_57790 00:04:42.601 size: 50.003479 MiB name: msgpool_57790 00:04:42.601 size: 36.509338 MiB name: fsdev_io_57790 00:04:42.601 size: 21.763794 MiB name: PDU_Pool 00:04:42.601 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:42.601 size: 4.133484 MiB name: evtpool_57790 00:04:42.601 size: 0.026123 MiB name: Session_Pool 00:04:42.601 end mempools------- 00:04:42.601 6 memzones totaling size 4.142822 MiB 00:04:42.601 size: 1.000366 MiB name: RG_ring_0_57790 00:04:42.601 size: 1.000366 MiB name: RG_ring_1_57790 00:04:42.601 size: 1.000366 MiB name: RG_ring_4_57790 00:04:42.601 size: 1.000366 MiB name: RG_ring_5_57790 00:04:42.601 size: 0.125366 MiB name: RG_ring_2_57790 00:04:42.601 size: 0.015991 MiB name: RG_ring_3_57790 00:04:42.601 end memzones------- 00:04:42.601 12:12:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:42.601 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:04:42.601 list of free elements. 
size: 10.802490 MiB 00:04:42.601 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:42.601 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:42.601 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:42.601 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:42.601 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:42.601 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:42.601 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:42.601 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:42.601 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:04:42.601 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:42.601 element at address: 0x200000c00000 with size: 0.486267 MiB 00:04:42.601 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:42.601 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:42.601 element at address: 0x200028200000 with size: 0.395752 MiB 00:04:42.601 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:42.601 list of standard malloc elements. size: 199.268616 MiB 00:04:42.601 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:42.601 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:42.601 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:42.601 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:42.601 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:42.601 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:42.601 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:42.601 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:42.601 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:42.601 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:42.601 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:42.601 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:42.601 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:42.601 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:42.602 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:04:42.602 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:04:42.602 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:42.602 element at address: 0x200028265500 with size: 0.000183 MiB 00:04:42.602 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826c480 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826c540 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826c600 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826c780 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826c840 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826c900 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d080 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d140 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d200 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d380 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d440 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d500 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d680 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d740 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d800 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826d980 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826da40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826db00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826de00 with size: 0.000183 MiB 00:04:42.602 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826df80 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e040 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e100 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e280 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e400 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e580 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e640 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e700 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e880 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826e940 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f000 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f180 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f240 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f300 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f480 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f540 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f600 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f780 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f840 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f900 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:42.603 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:42.603 list of memzone associated elements. 
size: 607.928894 MiB 00:04:42.603 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:42.603 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:42.603 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:42.603 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:42.603 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:42.603 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57790_0 00:04:42.603 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:42.603 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57790_0 00:04:42.603 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:42.603 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57790_0 00:04:42.603 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:42.603 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:42.603 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:42.603 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:42.603 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:42.603 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57790_0 00:04:42.603 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:42.603 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57790 00:04:42.603 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:42.603 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57790 00:04:42.603 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:42.603 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:42.603 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:42.603 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:42.603 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:42.603 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:42.603 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:42.603 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:42.603 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:42.603 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57790 00:04:42.603 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:42.603 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57790 00:04:42.603 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:42.603 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57790 00:04:42.603 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:42.603 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57790 00:04:42.603 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:42.603 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57790 00:04:42.603 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:42.603 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57790 00:04:42.603 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:42.603 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:42.603 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:42.603 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:42.603 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:42.603 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:42.603 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:42.603 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57790 00:04:42.603 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:42.603 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57790 00:04:42.603 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:42.603 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:42.603 element at address: 0x200028265680 with size: 0.023743 MiB 00:04:42.603 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:42.603 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:42.603 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57790 00:04:42.603 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:04:42.603 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:42.603 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:42.603 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57790 00:04:42.603 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:42.603 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57790 00:04:42.603 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:42.603 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57790 00:04:42.603 element at address: 0x20002826c280 with size: 0.000305 MiB 00:04:42.603 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:42.603 12:12:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:42.603 12:12:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57790 00:04:42.603 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57790 ']' 00:04:42.603 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57790 00:04:42.603 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:42.603 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.603 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57790 00:04:42.603 killing process with pid 57790 00:04:42.603 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.603 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.603 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57790' 00:04:42.603 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57790 00:04:42.603 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57790 00:04:42.863 ************************************ 00:04:42.863 END TEST dpdk_mem_utility 00:04:42.863 ************************************ 00:04:42.863 00:04:42.863 real 0m1.563s 00:04:42.863 user 0m1.758s 00:04:42.863 sys 0m0.321s 00:04:42.863 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.863 12:12:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.863 12:12:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:42.863 12:12:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.863 12:12:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.863 12:12:29 -- common/autotest_common.sh@10 -- # set +x 
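The dpdk_mem_utility pass above exercises three pieces: the spdk_tgt application, the env_dpdk_get_mem_stats RPC (whose response, shown earlier, names /tmp/spdk_mem_dump.txt as the dump file), and scripts/dpdk_mem_info.py, which renders the heap/mempool/memzone totals and, with -m 0, the per-element listing for heap 0. A minimal by-hand sketch of that flow follows; it reuses the paths from this run's layout and substitutes a simple socket poll (an assumption on my part) for the harness's waitforlisten helper.

#!/usr/bin/env bash
# Sketch only: replays the dpdk_mem_utility flow traced above by hand.
# Paths match this run's layout (/home/vagrant/spdk_repo/spdk); adjust for your checkout.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk

"$SPDK/build/bin/spdk_tgt" &                   # start the target, as the test does
tgt_pid=$!
while [ ! -S /var/tmp/spdk.sock ]; do          # crude stand-in for the waitforlisten helper
  sleep 0.2
done

"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats  # asks the target to write /tmp/spdk_mem_dump.txt

"$SPDK/scripts/dpdk_mem_info.py"               # heap/mempool/memzone summary, as printed above
"$SPDK/scripts/dpdk_mem_info.py" -m 0          # per-element detail for heap 0 (the long listing above)

kill "$tgt_pid"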
00:04:42.863 ************************************ 00:04:42.863 START TEST event 00:04:42.863 ************************************ 00:04:42.863 12:12:29 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:42.863 * Looking for test storage... 00:04:42.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:42.863 12:12:29 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.863 12:12:29 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.863 12:12:29 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.123 12:12:29 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.123 12:12:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.123 12:12:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.123 12:12:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.123 12:12:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.123 12:12:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.123 12:12:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.123 12:12:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.123 12:12:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.123 12:12:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.123 12:12:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.123 12:12:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.123 12:12:29 event -- scripts/common.sh@344 -- # case "$op" in 00:04:43.123 12:12:29 event -- scripts/common.sh@345 -- # : 1 00:04:43.123 12:12:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.123 12:12:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.123 12:12:29 event -- scripts/common.sh@365 -- # decimal 1 00:04:43.123 12:12:29 event -- scripts/common.sh@353 -- # local d=1 00:04:43.123 12:12:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.123 12:12:29 event -- scripts/common.sh@355 -- # echo 1 00:04:43.123 12:12:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.123 12:12:29 event -- scripts/common.sh@366 -- # decimal 2 00:04:43.123 12:12:29 event -- scripts/common.sh@353 -- # local d=2 00:04:43.123 12:12:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.123 12:12:29 event -- scripts/common.sh@355 -- # echo 2 00:04:43.123 12:12:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.123 12:12:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.123 12:12:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.123 12:12:29 event -- scripts/common.sh@368 -- # return 0 00:04:43.123 12:12:29 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.123 12:12:29 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.123 --rc genhtml_branch_coverage=1 00:04:43.123 --rc genhtml_function_coverage=1 00:04:43.123 --rc genhtml_legend=1 00:04:43.123 --rc geninfo_all_blocks=1 00:04:43.123 --rc geninfo_unexecuted_blocks=1 00:04:43.123 00:04:43.123 ' 00:04:43.123 12:12:29 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.123 --rc genhtml_branch_coverage=1 00:04:43.123 --rc genhtml_function_coverage=1 00:04:43.123 --rc genhtml_legend=1 00:04:43.123 --rc 
geninfo_all_blocks=1 00:04:43.123 --rc geninfo_unexecuted_blocks=1 00:04:43.123 00:04:43.123 ' 00:04:43.123 12:12:29 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.123 --rc genhtml_branch_coverage=1 00:04:43.123 --rc genhtml_function_coverage=1 00:04:43.123 --rc genhtml_legend=1 00:04:43.123 --rc geninfo_all_blocks=1 00:04:43.123 --rc geninfo_unexecuted_blocks=1 00:04:43.123 00:04:43.123 ' 00:04:43.123 12:12:29 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.123 --rc genhtml_branch_coverage=1 00:04:43.123 --rc genhtml_function_coverage=1 00:04:43.123 --rc genhtml_legend=1 00:04:43.123 --rc geninfo_all_blocks=1 00:04:43.123 --rc geninfo_unexecuted_blocks=1 00:04:43.123 00:04:43.123 ' 00:04:43.123 12:12:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:43.123 12:12:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:43.123 12:12:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.123 12:12:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:43.123 12:12:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.123 12:12:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.123 ************************************ 00:04:43.123 START TEST event_perf 00:04:43.123 ************************************ 00:04:43.123 12:12:29 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.123 Running I/O for 1 seconds...[2024-12-06 12:12:29.569322] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:43.123 [2024-12-06 12:12:29.569998] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57875 ] 00:04:43.123 [2024-12-06 12:12:29.713857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.123 [2024-12-06 12:12:29.743392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.123 [2024-12-06 12:12:29.743490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.123 [2024-12-06 12:12:29.743613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.123 [2024-12-06 12:12:29.743616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.501 Running I/O for 1 seconds... 00:04:44.501 lcore 0: 211064 00:04:44.501 lcore 1: 211063 00:04:44.501 lcore 2: 211063 00:04:44.501 lcore 3: 211064 00:04:44.501 done. 
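The four per-lcore counters above are the output of the event_perf benchmark the harness launched for this test. A one-line sketch of the same invocation, using the flags visible in the trace, is below; -m 0xF is the core mask that puts reactors on cores 0-3 (hence four counters) and -t 1 measures for one second. The reactor and reactor_perf runs that follow in this log use the same -t flag on a single core.

# Sketch: same invocation the harness traced above; binary path is this run's layout.
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1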
00:04:44.501 00:04:44.501 real 0m1.232s 00:04:44.501 user 0m4.072s 00:04:44.501 sys 0m0.040s 00:04:44.501 12:12:30 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.501 ************************************ 00:04:44.501 END TEST event_perf 00:04:44.501 ************************************ 00:04:44.501 12:12:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.501 12:12:30 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.501 12:12:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:44.501 12:12:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.501 12:12:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.501 ************************************ 00:04:44.501 START TEST event_reactor 00:04:44.501 ************************************ 00:04:44.501 12:12:30 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.501 [2024-12-06 12:12:30.849333] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:44.501 [2024-12-06 12:12:30.849418] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57908 ] 00:04:44.501 [2024-12-06 12:12:30.988510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.501 [2024-12-06 12:12:31.016271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.436 test_start 00:04:45.436 oneshot 00:04:45.436 tick 100 00:04:45.436 tick 100 00:04:45.436 tick 250 00:04:45.436 tick 100 00:04:45.436 tick 100 00:04:45.436 tick 250 00:04:45.436 tick 100 00:04:45.436 tick 500 00:04:45.436 tick 100 00:04:45.436 tick 100 00:04:45.436 tick 250 00:04:45.436 tick 100 00:04:45.436 tick 100 00:04:45.436 test_end 00:04:45.436 00:04:45.436 real 0m1.217s 00:04:45.436 user 0m1.088s 00:04:45.436 sys 0m0.025s 00:04:45.436 12:12:32 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.436 12:12:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:45.436 ************************************ 00:04:45.436 END TEST event_reactor 00:04:45.436 ************************************ 00:04:45.436 12:12:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.436 12:12:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:45.695 12:12:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.695 12:12:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.695 ************************************ 00:04:45.695 START TEST event_reactor_perf 00:04:45.695 ************************************ 00:04:45.695 12:12:32 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.695 [2024-12-06 12:12:32.121751] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:45.695 [2024-12-06 12:12:32.121819] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57938 ] 00:04:45.695 [2024-12-06 12:12:32.258736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.695 [2024-12-06 12:12:32.288514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.073 test_start 00:04:47.073 test_end 00:04:47.073 Performance: 477016 events per second 00:04:47.073 ************************************ 00:04:47.073 END TEST event_reactor_perf 00:04:47.073 ************************************ 00:04:47.073 00:04:47.073 real 0m1.222s 00:04:47.073 user 0m1.090s 00:04:47.073 sys 0m0.027s 00:04:47.073 12:12:33 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.073 12:12:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.073 12:12:33 event -- event/event.sh@49 -- # uname -s 00:04:47.073 12:12:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.073 12:12:33 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.073 12:12:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.073 12:12:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.073 12:12:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.073 ************************************ 00:04:47.073 START TEST event_scheduler 00:04:47.073 ************************************ 00:04:47.073 12:12:33 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.073 * Looking for test storage... 
00:04:47.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:47.073 12:12:33 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.073 12:12:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.073 12:12:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.073 12:12:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.073 12:12:33 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:47.073 12:12:33 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.073 12:12:33 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.073 --rc genhtml_branch_coverage=1 00:04:47.073 --rc genhtml_function_coverage=1 00:04:47.073 --rc genhtml_legend=1 00:04:47.073 --rc geninfo_all_blocks=1 00:04:47.073 --rc geninfo_unexecuted_blocks=1 00:04:47.073 00:04:47.073 ' 00:04:47.073 12:12:33 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.073 --rc genhtml_branch_coverage=1 00:04:47.073 --rc genhtml_function_coverage=1 00:04:47.073 --rc genhtml_legend=1 00:04:47.074 --rc geninfo_all_blocks=1 00:04:47.074 --rc geninfo_unexecuted_blocks=1 00:04:47.074 00:04:47.074 ' 00:04:47.074 12:12:33 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.074 --rc genhtml_branch_coverage=1 00:04:47.074 --rc genhtml_function_coverage=1 00:04:47.074 --rc genhtml_legend=1 00:04:47.074 --rc geninfo_all_blocks=1 00:04:47.074 --rc geninfo_unexecuted_blocks=1 00:04:47.074 00:04:47.074 ' 00:04:47.074 12:12:33 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.074 --rc genhtml_branch_coverage=1 00:04:47.074 --rc genhtml_function_coverage=1 00:04:47.074 --rc genhtml_legend=1 00:04:47.074 --rc geninfo_all_blocks=1 00:04:47.074 --rc geninfo_unexecuted_blocks=1 00:04:47.074 00:04:47.074 ' 00:04:47.074 12:12:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:47.074 12:12:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58007 00:04:47.074 12:12:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:47.074 12:12:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.074 12:12:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58007 00:04:47.074 12:12:33 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58007 ']' 00:04:47.074 12:12:33 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.074 12:12:33 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.074 12:12:33 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.074 12:12:33 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.074 12:12:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.074 [2024-12-06 12:12:33.625841] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:47.074 [2024-12-06 12:12:33.626105] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58007 ] 00:04:47.333 [2024-12-06 12:12:33.777257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.333 [2024-12-06 12:12:33.822439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.333 [2024-12-06 12:12:33.822483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.333 [2024-12-06 12:12:33.822609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.333 [2024-12-06 12:12:33.822618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.333 12:12:33 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.333 12:12:33 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:47.333 12:12:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:47.333 12:12:33 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.333 12:12:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.333 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:47.333 POWER: Cannot set governor of lcore 0 to userspace 00:04:47.333 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:47.333 POWER: Cannot set governor of lcore 0 to performance 00:04:47.333 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:47.333 POWER: Cannot set governor of lcore 0 to userspace 00:04:47.333 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:47.333 POWER: Cannot set governor of lcore 0 to userspace 00:04:47.333 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:47.333 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:47.333 POWER: Unable to set Power Management Environment for lcore 0 00:04:47.333 [2024-12-06 12:12:33.901136] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:47.334 [2024-12-06 12:12:33.901282] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:47.334 [2024-12-06 12:12:33.901406] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:47.334 [2024-12-06 12:12:33.901540] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:47.334 [2024-12-06 12:12:33.901660] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:47.334 [2024-12-06 12:12:33.901713] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:47.334 12:12:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.334 12:12:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:47.334 12:12:33 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.334 12:12:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.334 [2024-12-06 12:12:33.940079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:47.334 [2024-12-06 12:12:33.963129] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:47.334 12:12:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.334 12:12:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:47.334 12:12:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.334 12:12:33 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.334 12:12:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.334 ************************************ 00:04:47.334 START TEST scheduler_create_thread 00:04:47.334 ************************************ 00:04:47.334 12:12:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:47.334 12:12:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:47.334 12:12:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.334 12:12:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 2 00:04:47.594 12:12:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.594 12:12:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:47.594 12:12:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.594 12:12:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 3 00:04:47.594 12:12:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.594 12:12:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:47.594 12:12:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.594 12:12:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 4 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 5 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 6 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 7 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 8 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 9 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 10 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.594 12:12:34 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.594 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.163 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.163 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:48.163 12:12:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:48.163 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.163 12:12:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.099 ************************************ 00:04:49.099 END TEST scheduler_create_thread 00:04:49.099 ************************************ 00:04:49.099 12:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.099 00:04:49.099 real 0m1.753s 00:04:49.099 user 0m0.020s 00:04:49.099 sys 0m0.004s 00:04:49.099 12:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.099 12:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.357 12:12:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:49.357 12:12:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58007 00:04:49.357 12:12:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58007 ']' 00:04:49.357 12:12:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58007 00:04:49.357 12:12:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:49.357 12:12:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.357 12:12:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58007 00:04:49.357 killing process with pid 58007 00:04:49.357 12:12:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:49.357 12:12:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:49.357 12:12:35 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58007' 00:04:49.357 12:12:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58007 00:04:49.357 12:12:35 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58007 00:04:49.616 [2024-12-06 12:12:36.209061] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:49.875 ************************************ 00:04:49.875 END TEST event_scheduler 00:04:49.875 ************************************ 00:04:49.875 00:04:49.875 real 0m2.951s 00:04:49.875 user 0m3.783s 00:04:49.875 sys 0m0.304s 00:04:49.875 12:12:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.875 12:12:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.875 12:12:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:49.875 12:12:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:49.875 12:12:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.875 12:12:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.875 12:12:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.875 ************************************ 00:04:49.875 START TEST app_repeat 00:04:49.875 ************************************ 00:04:49.875 12:12:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:49.875 12:12:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.875 12:12:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.875 12:12:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:49.875 12:12:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.875 12:12:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:49.875 12:12:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:49.875 12:12:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:49.875 Process app_repeat pid: 58087 00:04:49.875 spdk_app_start Round 0 00:04:49.875 12:12:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58087 00:04:49.875 12:12:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.876 12:12:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:49.876 12:12:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58087' 00:04:49.876 12:12:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.876 12:12:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:49.876 12:12:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58087 /var/tmp/spdk-nbd.sock 00:04:49.876 12:12:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58087 ']' 00:04:49.876 12:12:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.876 12:12:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.876 12:12:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
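The trace above is the usual waitforlisten pattern: the test prints the PID of the freshly started app_repeat binary and then polls its UNIX-domain RPC socket (/var/tmp/spdk-nbd.sock) until the app answers, only then letting the nbd setup continue. A minimal sketch of that polling loop, assuming the rpc.py path and socket shown in the log; the retry budget and sleep interval are illustrative, not the exact values used by the helper in autotest_common.sh:

    # Sketch: wait until the SPDK app with PID $1 accepts RPCs on socket $2.
    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-nbd.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1             # app died early
            # rpc_get_methods succeeds once the RPC server is listening on the socket.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5                                          # assumed back-off
        done
        return 1
    }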
00:04:49.876 12:12:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.876 12:12:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.876 [2024-12-06 12:12:36.417909] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:49.876 [2024-12-06 12:12:36.417997] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58087 ] 00:04:50.133 [2024-12-06 12:12:36.559710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.133 [2024-12-06 12:12:36.587457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.133 [2024-12-06 12:12:36.587465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.133 [2024-12-06 12:12:36.615509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:51.067 12:12:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.067 12:12:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:51.067 12:12:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.067 Malloc0 00:04:51.067 12:12:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.325 Malloc1 00:04:51.325 12:12:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.325 12:12:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.584 /dev/nbd0 00:04:51.584 12:12:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.584 12:12:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.584 1+0 records in 00:04:51.584 1+0 records out 00:04:51.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371905 s, 11.0 MB/s 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.584 12:12:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.584 12:12:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.584 12:12:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.584 12:12:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.841 /dev/nbd1 00:04:52.099 12:12:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.099 12:12:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.099 1+0 records in 00:04:52.099 1+0 records out 00:04:52.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309445 s, 13.2 MB/s 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.099 12:12:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.099 12:12:38 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:04:52.099 12:12:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.099 12:12:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.099 12:12:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.099 12:12:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.099 12:12:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.368 { 00:04:52.368 "nbd_device": "/dev/nbd0", 00:04:52.368 "bdev_name": "Malloc0" 00:04:52.368 }, 00:04:52.368 { 00:04:52.368 "nbd_device": "/dev/nbd1", 00:04:52.368 "bdev_name": "Malloc1" 00:04:52.368 } 00:04:52.368 ]' 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.368 { 00:04:52.368 "nbd_device": "/dev/nbd0", 00:04:52.368 "bdev_name": "Malloc0" 00:04:52.368 }, 00:04:52.368 { 00:04:52.368 "nbd_device": "/dev/nbd1", 00:04:52.368 "bdev_name": "Malloc1" 00:04:52.368 } 00:04:52.368 ]' 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.368 /dev/nbd1' 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.368 /dev/nbd1' 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.368 256+0 records in 00:04:52.368 256+0 records out 00:04:52.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104291 s, 101 MB/s 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.368 256+0 records in 00:04:52.368 256+0 records out 00:04:52.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216989 s, 48.3 MB/s 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.368 256+0 records in 00:04:52.368 
256+0 records out 00:04:52.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0327347 s, 32.0 MB/s 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.368 12:12:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.651 12:12:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.651 12:12:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.651 12:12:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.651 12:12:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.651 12:12:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.651 12:12:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.651 12:12:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.651 12:12:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.651 12:12:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.651 12:12:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.926 12:12:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.926 12:12:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.926 12:12:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.926 12:12:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.926 12:12:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:04:52.926 12:12:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.926 12:12:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.926 12:12:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.926 12:12:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.926 12:12:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.926 12:12:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.184 12:12:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.184 12:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.184 12:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.443 12:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.443 12:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.443 12:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.443 12:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.443 12:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.443 12:12:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.443 12:12:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.443 12:12:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.443 12:12:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.443 12:12:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.702 12:12:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.702 [2024-12-06 12:12:40.285315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.702 [2024-12-06 12:12:40.310779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.702 [2024-12-06 12:12:40.310790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.702 [2024-12-06 12:12:40.337452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:53.702 [2024-12-06 12:12:40.337562] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.702 [2024-12-06 12:12:40.337575] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.993 12:12:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.993 spdk_app_start Round 1 00:04:56.993 12:12:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:56.993 12:12:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58087 /var/tmp/spdk-nbd.sock 00:04:56.993 12:12:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58087 ']' 00:04:56.993 12:12:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.993 12:12:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.993 12:12:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
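Round 1, which starts here, repeats the data-verify cycle the trace just walked through for Round 0: two 64 MiB malloc bdevs are created over RPC, exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each device with dd and compared back with cmp, and the disks are stopped again. Condensed into a plain sequence of the commands visible in the trace (error handling and the waitfornbd/waitfornbd_exit retries are omitted):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    $rpc bdev_malloc_create 64 4096          # -> Malloc0 (64 MiB, 4 KiB blocks)
    $rpc bdev_malloc_create 64 4096          # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=$tmp bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct     # write it through nbd
        cmp -b -n 1M $tmp $nbd                                # read back and verify
    done
    rm $tmp

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1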
00:04:56.993 12:12:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.993 12:12:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.993 12:12:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.993 12:12:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:56.993 12:12:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.253 Malloc0 00:04:57.254 12:12:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.513 Malloc1 00:04:57.513 12:12:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.513 12:12:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.771 /dev/nbd0 00:04:57.771 12:12:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.771 12:12:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.771 12:12:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:57.771 12:12:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.771 12:12:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.771 12:12:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.771 12:12:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:57.772 12:12:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.772 12:12:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.772 12:12:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.772 12:12:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.772 1+0 records in 00:04:57.772 1+0 records out 
00:04:57.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258009 s, 15.9 MB/s 00:04:57.772 12:12:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.772 12:12:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.772 12:12:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.772 12:12:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.772 12:12:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.772 12:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.772 12:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.772 12:12:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.031 /dev/nbd1 00:04:58.031 12:12:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.031 12:12:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.031 1+0 records in 00:04:58.031 1+0 records out 00:04:58.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253508 s, 16.2 MB/s 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.031 12:12:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.031 12:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.031 12:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.031 12:12:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.031 12:12:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.031 12:12:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:58.290 { 00:04:58.290 "nbd_device": "/dev/nbd0", 00:04:58.290 "bdev_name": "Malloc0" 00:04:58.290 }, 00:04:58.290 { 00:04:58.290 "nbd_device": "/dev/nbd1", 00:04:58.290 "bdev_name": "Malloc1" 00:04:58.290 } 
00:04:58.290 ]' 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.290 { 00:04:58.290 "nbd_device": "/dev/nbd0", 00:04:58.290 "bdev_name": "Malloc0" 00:04:58.290 }, 00:04:58.290 { 00:04:58.290 "nbd_device": "/dev/nbd1", 00:04:58.290 "bdev_name": "Malloc1" 00:04:58.290 } 00:04:58.290 ]' 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.290 /dev/nbd1' 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.290 /dev/nbd1' 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.290 256+0 records in 00:04:58.290 256+0 records out 00:04:58.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00853826 s, 123 MB/s 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.290 256+0 records in 00:04:58.290 256+0 records out 00:04:58.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208711 s, 50.2 MB/s 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.290 256+0 records in 00:04:58.290 256+0 records out 00:04:58.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307915 s, 34.1 MB/s 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.290 12:12:44 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.290 12:12:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.857 12:12:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.116 12:12:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.116 12:12:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.683 12:12:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.683 [2024-12-06 12:12:46.155697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.683 [2024-12-06 12:12:46.181189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.683 [2024-12-06 12:12:46.181196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.683 [2024-12-06 12:12:46.208290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:59.683 [2024-12-06 12:12:46.208398] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.683 [2024-12-06 12:12:46.208412] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.976 12:12:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.976 spdk_app_start Round 2 00:05:02.976 12:12:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:02.976 12:12:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58087 /var/tmp/spdk-nbd.sock 00:05:02.976 12:12:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58087 ']' 00:05:02.976 12:12:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.976 12:12:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.976 12:12:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:02.976 12:12:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.976 12:12:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.976 12:12:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.976 12:12:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.976 12:12:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.976 Malloc0 00:05:02.976 12:12:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.236 Malloc1 00:05:03.236 12:12:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.236 12:12:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.495 /dev/nbd0 00:05:03.495 12:12:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.495 12:12:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.495 1+0 records in 00:05:03.495 1+0 records out 
00:05:03.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268844 s, 15.2 MB/s 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:03.495 12:12:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:03.755 12:12:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.755 12:12:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.755 12:12:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.755 /dev/nbd1 00:05:03.755 12:12:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.755 12:12:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.755 1+0 records in 00:05:03.755 1+0 records out 00:05:03.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280004 s, 14.6 MB/s 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:03.755 12:12:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:03.755 12:12:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.755 12:12:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.755 12:12:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.755 12:12:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.756 12:12:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.325 { 00:05:04.325 "nbd_device": "/dev/nbd0", 00:05:04.325 "bdev_name": "Malloc0" 00:05:04.325 }, 00:05:04.325 { 00:05:04.325 "nbd_device": "/dev/nbd1", 00:05:04.325 "bdev_name": "Malloc1" 00:05:04.325 } 
00:05:04.325 ]' 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.325 { 00:05:04.325 "nbd_device": "/dev/nbd0", 00:05:04.325 "bdev_name": "Malloc0" 00:05:04.325 }, 00:05:04.325 { 00:05:04.325 "nbd_device": "/dev/nbd1", 00:05:04.325 "bdev_name": "Malloc1" 00:05:04.325 } 00:05:04.325 ]' 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.325 /dev/nbd1' 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.325 /dev/nbd1' 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.325 256+0 records in 00:05:04.325 256+0 records out 00:05:04.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00708832 s, 148 MB/s 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.325 256+0 records in 00:05:04.325 256+0 records out 00:05:04.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220108 s, 47.6 MB/s 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.325 256+0 records in 00:05:04.325 256+0 records out 00:05:04.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026173 s, 40.1 MB/s 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.325 12:12:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.584 12:12:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.584 12:12:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.584 12:12:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.584 12:12:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.584 12:12:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.584 12:12:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.584 12:12:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.584 12:12:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.584 12:12:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.584 12:12:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.843 12:12:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.844 12:12:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.844 12:12:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.844 12:12:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.844 12:12:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.844 12:12:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.844 12:12:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.844 12:12:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.844 12:12:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.844 12:12:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.844 12:12:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.104 12:12:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.104 12:12:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.104 12:12:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
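The block above is the data-integrity half of the NBD test: one 1 MiB random pattern is generated with dd, written to every exported /dev/nbdX device with oflag=direct, and then compared back byte-for-byte with cmp. A condensed sketch of that pattern, assuming the devices are already exported and writable (the device list and temp-file path here are illustrative, not the test's exact values):

    # generate 1 MiB of random reference data once
    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    # write the same pattern to each NBD device, bypassing the page cache
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # read back and compare the first 1 MiB of each device against the source file
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "data mismatch on $dev" >&2
    done

    rm "$tmp_file"

cmp exits non-zero at the first differing byte, so any corruption surfaces as soon as the verify pass reaches it.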
.nbd_device' 00:05:05.363 12:12:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.363 12:12:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.363 12:12:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.363 12:12:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.363 12:12:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.363 12:12:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.363 12:12:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.363 12:12:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.363 12:12:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.363 12:12:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.363 12:12:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.622 [2024-12-06 12:12:52.100858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.622 [2024-12-06 12:12:52.126426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.622 [2024-12-06 12:12:52.126436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.622 [2024-12-06 12:12:52.153091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:05.622 [2024-12-06 12:12:52.153243] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.622 [2024-12-06 12:12:52.153257] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.913 12:12:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58087 /var/tmp/spdk-nbd.sock 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58087 ']' 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
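Stopping the exports goes through the same RPC socket (nbd_stop_disk), after which the helper polls /proc/partitions until the kernel has actually dropped the device node. A minimal equivalent of that wait loop; the 20-iteration bound is taken from the trace, while the 0.1 s sleep between checks is an assumption:

    wait_for_nbd_exit() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            # -w matches the whole word, so nbd1 does not also match nbd10
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0
            fi
            sleep 0.1
        done
        echo "$nbd_name still listed in /proc/partitions after $((i - 1)) checks" >&2
        return 1
    }

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    wait_for_nbd_exit nbd0

Once both devices are gone, nbd_get_disks returns an empty list, which is the count=0 check visible in the trace.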
00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:08.913 12:12:55 event.app_repeat -- event/event.sh@39 -- # killprocess 58087 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58087 ']' 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58087 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58087 00:05:08.913 killing process with pid 58087 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58087' 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58087 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58087 00:05:08.913 spdk_app_start is called in Round 0. 00:05:08.913 Shutdown signal received, stop current app iteration 00:05:08.913 Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 reinitialization... 00:05:08.913 spdk_app_start is called in Round 1. 00:05:08.913 Shutdown signal received, stop current app iteration 00:05:08.913 Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 reinitialization... 00:05:08.913 spdk_app_start is called in Round 2. 00:05:08.913 Shutdown signal received, stop current app iteration 00:05:08.913 Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 reinitialization... 00:05:08.913 spdk_app_start is called in Round 3. 00:05:08.913 Shutdown signal received, stop current app iteration 00:05:08.913 12:12:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:08.913 12:12:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:08.913 00:05:08.913 real 0m19.045s 00:05:08.913 user 0m43.754s 00:05:08.913 sys 0m2.492s 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.913 12:12:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.913 ************************************ 00:05:08.913 END TEST app_repeat 00:05:08.913 ************************************ 00:05:08.913 12:12:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:08.914 12:12:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:08.914 12:12:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.914 12:12:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.914 12:12:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.914 ************************************ 00:05:08.914 START TEST cpu_locks 00:05:08.914 ************************************ 00:05:08.914 12:12:55 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:08.914 * Looking for test storage... 
00:05:08.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:08.914 12:12:55 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.914 12:12:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.914 12:12:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.172 12:12:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.172 12:12:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.172 12:12:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.172 12:12:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.172 12:12:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.173 12:12:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:09.173 12:12:55 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.173 12:12:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.173 --rc genhtml_branch_coverage=1 00:05:09.173 --rc genhtml_function_coverage=1 00:05:09.173 --rc genhtml_legend=1 00:05:09.173 --rc geninfo_all_blocks=1 00:05:09.173 --rc geninfo_unexecuted_blocks=1 00:05:09.173 00:05:09.173 ' 00:05:09.173 12:12:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.173 --rc genhtml_branch_coverage=1 00:05:09.173 --rc genhtml_function_coverage=1 
00:05:09.173 --rc genhtml_legend=1 00:05:09.173 --rc geninfo_all_blocks=1 00:05:09.173 --rc geninfo_unexecuted_blocks=1 00:05:09.173 00:05:09.173 ' 00:05:09.173 12:12:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.173 --rc genhtml_branch_coverage=1 00:05:09.173 --rc genhtml_function_coverage=1 00:05:09.173 --rc genhtml_legend=1 00:05:09.173 --rc geninfo_all_blocks=1 00:05:09.173 --rc geninfo_unexecuted_blocks=1 00:05:09.173 00:05:09.173 ' 00:05:09.173 12:12:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.173 --rc genhtml_branch_coverage=1 00:05:09.173 --rc genhtml_function_coverage=1 00:05:09.173 --rc genhtml_legend=1 00:05:09.173 --rc geninfo_all_blocks=1 00:05:09.173 --rc geninfo_unexecuted_blocks=1 00:05:09.173 00:05:09.173 ' 00:05:09.173 12:12:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:09.173 12:12:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:09.173 12:12:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:09.173 12:12:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:09.173 12:12:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.173 12:12:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.173 12:12:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.173 ************************************ 00:05:09.173 START TEST default_locks 00:05:09.173 ************************************ 00:05:09.173 12:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:09.173 12:12:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58529 00:05:09.173 12:12:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.173 12:12:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58529 00:05:09.173 12:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58529 ']' 00:05:09.173 12:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.173 12:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.173 12:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.173 12:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.173 12:12:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.173 [2024-12-06 12:12:55.698072] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:09.173 [2024-12-06 12:12:55.698162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58529 ] 00:05:09.432 [2024-12-06 12:12:55.837450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.432 [2024-12-06 12:12:55.864567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.432 [2024-12-06 12:12:55.900250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.432 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.432 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:09.432 12:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58529 00:05:09.432 12:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58529 00:05:09.432 12:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.690 12:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58529 00:05:09.690 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58529 ']' 00:05:09.690 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58529 00:05:09.690 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:09.690 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.690 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58529 00:05:09.690 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.690 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.690 killing process with pid 58529 00:05:09.690 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58529' 00:05:09.690 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58529 00:05:09.690 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58529 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58529 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58529 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58529 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58529 ']' 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.949 
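The default_locks case above boils down to: start one spdk_tgt on core 0, confirm it holds its CPU-core lock, kill it, and then confirm that waiting on the dead pid fails. A rough sketch of the positive half, assuming $bin points at the spdk_tgt binary from the log and that waitforlisten is the helper traced here:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$bin" -m 0x1 &
    pid=$!
    # waitforlisten "$pid"   # poll until the RPC socket answers

    # spdk_tgt flocks /var/tmp/spdk_cpu_lock_<core> for each core in its mask,
    # so the check is a grep over the locks held by the process
    lslocks -p "$pid" | grep -q spdk_cpu_lock || { echo "core lock missing" >&2; exit 1; }

    # the traced killprocess first inspects the process name, then kills and reaps
    ps --no-headers -o comm= "$pid"    # -> reactor_0 for a plain (non-sudo) target
    kill "$pid"
    wait "$pid" || true                # exit status reflects the signal

The negative half is the NOT waitforlisten call right after: once the pid is gone, waiting for it must return non-zero, which the wrapper converts into a pass.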
12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.949 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58529) - No such process 00:05:09.949 ERROR: process (pid: 58529) is no longer running 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.949 00:05:09.949 real 0m0.906s 00:05:09.949 user 0m0.972s 00:05:09.949 sys 0m0.338s 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.949 12:12:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.949 ************************************ 00:05:09.949 END TEST default_locks 00:05:09.949 ************************************ 00:05:09.949 12:12:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:09.949 12:12:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.949 12:12:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.949 12:12:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.949 ************************************ 00:05:09.949 START TEST default_locks_via_rpc 00:05:09.949 ************************************ 00:05:09.949 12:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:09.949 12:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58568 00:05:09.949 12:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58568 00:05:09.949 12:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.949 12:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58568 ']' 00:05:09.949 12:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.949 12:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:05:09.949 12:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.949 12:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.949 12:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.208 [2024-12-06 12:12:56.671436] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:10.208 [2024-12-06 12:12:56.671558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58568 ] 00:05:10.208 [2024-12-06 12:12:56.818237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.208 [2024-12-06 12:12:56.845011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.467 [2024-12-06 12:12:56.880436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:11.052 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.052 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:11.052 12:12:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:11.052 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.052 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.052 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.052 12:12:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:11.052 12:12:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.052 12:12:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.053 12:12:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.053 12:12:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:11.053 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.053 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.053 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.053 12:12:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58568 00:05:11.053 12:12:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58568 00:05:11.053 12:12:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.312 12:12:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58568 00:05:11.312 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58568 ']' 00:05:11.312 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58568 00:05:11.312 12:12:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:11.312 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.312 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58568 00:05:11.312 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.312 killing process with pid 58568 00:05:11.312 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.312 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58568' 00:05:11.312 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58568 00:05:11.312 12:12:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58568 00:05:11.571 00:05:11.571 real 0m1.534s 00:05:11.571 user 0m1.752s 00:05:11.571 sys 0m0.360s 00:05:11.571 ************************************ 00:05:11.571 END TEST default_locks_via_rpc 00:05:11.571 ************************************ 00:05:11.571 12:12:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.571 12:12:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.571 12:12:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:11.571 12:12:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.571 12:12:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.571 12:12:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.571 ************************************ 00:05:11.571 START TEST non_locking_app_on_locked_coremask 00:05:11.571 ************************************ 00:05:11.571 12:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:11.571 12:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58614 00:05:11.571 12:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58614 /var/tmp/spdk.sock 00:05:11.571 12:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.571 12:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58614 ']' 00:05:11.571 12:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.571 12:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.571 12:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
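The default_locks_via_rpc run that ends here (pid 58568) exercises the same locks but toggles them at runtime over the RPC socket instead of with a command-line flag. A rough equivalent of that sequence, assuming the target is already listening on the default /var/tmp/spdk.sock, $pid holds its process id, and the lock-file glob matches the /var/tmp/spdk_cpu_lock_* names used later in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # drop the core locks while the target keeps running
    "$rpc" framework_disable_cpumask_locks
    compgen -G "/var/tmp/spdk_cpu_lock_*" > /dev/null && { echo "lock files still present" >&2; exit 1; }

    # re-acquire them and confirm the process shows up in lslocks again
    "$rpc" framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock || { echo "lock not re-acquired" >&2; exit 1; }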
00:05:11.571 12:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.571 12:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.830 [2024-12-06 12:12:58.255127] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:11.830 [2024-12-06 12:12:58.255255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58614 ] 00:05:11.830 [2024-12-06 12:12:58.394754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.830 [2024-12-06 12:12:58.421671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.830 [2024-12-06 12:12:58.456297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.764 12:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.764 12:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:12.764 12:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:12.764 12:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58630 00:05:12.764 12:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58630 /var/tmp/spdk2.sock 00:05:12.764 12:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58630 ']' 00:05:12.764 12:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.764 12:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.764 12:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.764 12:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.764 12:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.764 [2024-12-06 12:12:59.282017] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:12.764 [2024-12-06 12:12:59.282116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58630 ] 00:05:13.022 [2024-12-06 12:12:59.436837] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:13.022 [2024-12-06 12:12:59.436885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.022 [2024-12-06 12:12:59.492390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.022 [2024-12-06 12:12:59.561484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.589 12:13:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.589 12:13:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:13.589 12:13:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58614 00:05:13.589 12:13:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58614 00:05:13.589 12:13:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.527 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58614 00:05:14.527 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58614 ']' 00:05:14.527 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58614 00:05:14.527 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:14.527 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.527 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58614 00:05:14.527 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.527 killing process with pid 58614 00:05:14.527 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.527 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58614' 00:05:14.527 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58614 00:05:14.527 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58614 00:05:15.096 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58630 00:05:15.096 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58630 ']' 00:05:15.096 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58630 00:05:15.096 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:15.096 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.096 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58630 00:05:15.096 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.096 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.096 killing process with pid 58630 00:05:15.096 12:13:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58630' 00:05:15.096 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58630 00:05:15.096 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58630 00:05:15.355 00:05:15.355 real 0m3.586s 00:05:15.355 user 0m4.252s 00:05:15.355 sys 0m0.896s 00:05:15.355 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.355 12:13:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.355 ************************************ 00:05:15.355 END TEST non_locking_app_on_locked_coremask 00:05:15.355 ************************************ 00:05:15.355 12:13:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:15.355 12:13:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.355 12:13:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.355 12:13:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.355 ************************************ 00:05:15.355 START TEST locking_app_on_unlocked_coremask 00:05:15.355 ************************************ 00:05:15.355 12:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:15.355 12:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58691 00:05:15.355 12:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:15.355 12:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58691 /var/tmp/spdk.sock 00:05:15.355 12:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58691 ']' 00:05:15.355 12:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.355 12:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.355 12:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.355 12:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.355 12:13:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.355 [2024-12-06 12:13:01.872263] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:15.355 [2024-12-06 12:13:01.872358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58691 ] 00:05:15.355 [2024-12-06 12:13:02.008466] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
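The pair of tests around this point shares one idea: two spdk_tgt instances may only share a core if one of them opts out of the lock with --disable-cpumask-locks, and the second instance then needs its own RPC socket (-r). A condensed sketch of the variant just traced, with the waitforlisten polling between the two launches omitted:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # first instance claims core 0 and serves RPC on the default socket
    "$bin" -m 0x1 &
    pid1=$!

    # second instance runs on the same core but skips the lock and uses a separate socket
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!

    # both come up; only pid1 shows a spdk_cpu_lock entry in lslocks
    # ... run the checks, then tear both down
    kill "$pid1" "$pid2"

Without the flag, the second launch would abort with the "Cannot create lock on core 0" error that shows up in the locked-coremask tests further down.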
00:05:15.355 [2024-12-06 12:13:02.008513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.614 [2024-12-06 12:13:02.037995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.614 [2024-12-06 12:13:02.073403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:15.614 12:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.614 12:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.614 12:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58694 00:05:15.614 12:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58694 /var/tmp/spdk2.sock 00:05:15.614 12:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:15.614 12:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58694 ']' 00:05:15.614 12:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.614 12:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.614 12:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.614 12:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.614 12:13:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.614 [2024-12-06 12:13:02.255753] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:15.615 [2024-12-06 12:13:02.255853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58694 ] 00:05:15.874 [2024-12-06 12:13:02.409852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.874 [2024-12-06 12:13:02.463854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.134 [2024-12-06 12:13:02.533939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.702 12:13:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.702 12:13:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:16.702 12:13:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58694 00:05:16.702 12:13:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58694 00:05:16.702 12:13:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.640 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58691 00:05:17.640 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58691 ']' 00:05:17.640 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58691 00:05:17.640 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.640 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.640 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58691 00:05:17.640 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.640 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.640 killing process with pid 58691 00:05:17.640 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58691' 00:05:17.640 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58691 00:05:17.640 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58691 00:05:17.899 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58694 00:05:17.899 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58694 ']' 00:05:17.900 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58694 00:05:17.900 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.900 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.900 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58694 00:05:17.900 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.900 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.900 killing process with pid 58694 00:05:17.900 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58694' 00:05:17.900 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58694 00:05:17.900 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58694 00:05:18.159 00:05:18.159 real 0m2.914s 00:05:18.159 user 0m3.468s 00:05:18.159 sys 0m0.833s 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.159 ************************************ 00:05:18.159 END TEST locking_app_on_unlocked_coremask 00:05:18.159 ************************************ 00:05:18.159 12:13:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:18.159 12:13:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.159 12:13:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.159 12:13:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.159 ************************************ 00:05:18.159 START TEST locking_app_on_locked_coremask 00:05:18.159 ************************************ 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58761 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58761 /var/tmp/spdk.sock 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58761 ']' 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.159 12:13:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.419 [2024-12-06 12:13:04.890335] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:18.419 [2024-12-06 12:13:04.890474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58761 ] 00:05:18.419 [2024-12-06 12:13:05.044594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.419 [2024-12-06 12:13:05.073406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.677 [2024-12-06 12:13:05.109531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.677 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.677 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:18.677 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58764 00:05:18.677 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:18.677 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58764 /var/tmp/spdk2.sock 00:05:18.677 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:18.677 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58764 /var/tmp/spdk2.sock 00:05:18.677 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:18.677 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.677 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:18.677 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.678 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58764 /var/tmp/spdk2.sock 00:05:18.678 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58764 ']' 00:05:18.678 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.678 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.678 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.678 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.678 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.678 [2024-12-06 12:13:05.286056] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:18.678 [2024-12-06 12:13:05.286155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58764 ] 00:05:18.936 [2024-12-06 12:13:05.438142] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58761 has claimed it. 00:05:18.936 [2024-12-06 12:13:05.438210] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:19.503 ERROR: process (pid: 58764) is no longer running 00:05:19.503 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58764) - No such process 00:05:19.503 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.503 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:19.503 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:19.503 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.503 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.503 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.503 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58761 00:05:19.503 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58761 00:05:19.503 12:13:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.763 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58761 00:05:19.763 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58761 ']' 00:05:19.763 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58761 00:05:19.763 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:19.763 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.763 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58761 00:05:19.763 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.763 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.763 killing process with pid 58761 00:05:19.763 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58761' 00:05:19.763 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58761 00:05:19.763 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58761 00:05:20.027 00:05:20.027 real 0m1.845s 00:05:20.027 user 0m2.185s 00:05:20.027 sys 0m0.530s 00:05:20.027 12:13:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.027 12:13:06 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:20.027 ************************************ 00:05:20.027 END TEST locking_app_on_locked_coremask 00:05:20.027 ************************************ 00:05:20.027 12:13:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:20.027 12:13:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.027 12:13:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.027 12:13:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.286 ************************************ 00:05:20.286 START TEST locking_overlapped_coremask 00:05:20.286 ************************************ 00:05:20.286 12:13:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:20.286 12:13:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58815 00:05:20.286 12:13:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58815 /var/tmp/spdk.sock 00:05:20.286 12:13:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:20.286 12:13:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58815 ']' 00:05:20.286 12:13:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.286 12:13:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.286 12:13:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.286 12:13:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.286 12:13:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.286 [2024-12-06 12:13:06.742127] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:20.286 [2024-12-06 12:13:06.742227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58815 ] 00:05:20.286 [2024-12-06 12:13:06.877676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.286 [2024-12-06 12:13:06.908798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.286 [2024-12-06 12:13:06.908978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.286 [2024-12-06 12:13:06.908983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.548 [2024-12-06 12:13:06.946596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58820 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58820 /var/tmp/spdk2.sock 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58820 /var/tmp/spdk2.sock 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58820 /var/tmp/spdk2.sock 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58820 ']' 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.548 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.548 [2024-12-06 12:13:07.136799] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:20.548 [2024-12-06 12:13:07.136920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58820 ] 00:05:20.847 [2024-12-06 12:13:07.297323] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58815 has claimed it. 00:05:20.847 [2024-12-06 12:13:07.297386] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:21.428 ERROR: process (pid: 58820) is no longer running 00:05:21.428 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58820) - No such process 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58815 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58815 ']' 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58815 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58815 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.428 killing process with pid 58815 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58815' 00:05:21.428 12:13:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58815 00:05:21.428 12:13:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58815 00:05:21.428 00:05:21.428 real 0m1.375s 00:05:21.428 user 0m3.839s 00:05:21.428 sys 0m0.263s 00:05:21.428 12:13:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.428 12:13:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.428 ************************************ 00:05:21.428 END TEST locking_overlapped_coremask 00:05:21.428 ************************************ 00:05:21.688 12:13:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:21.688 12:13:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.688 12:13:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.688 12:13:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.688 ************************************ 00:05:21.688 START TEST locking_overlapped_coremask_via_rpc 00:05:21.688 ************************************ 00:05:21.688 12:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:21.688 12:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58860 00:05:21.688 12:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:21.688 12:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58860 /var/tmp/spdk.sock 00:05:21.688 12:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58860 ']' 00:05:21.688 12:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.688 12:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.689 12:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.689 12:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.689 12:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.689 [2024-12-06 12:13:08.166937] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:21.689 [2024-12-06 12:13:08.167019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58860 ] 00:05:21.689 [2024-12-06 12:13:08.303156] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:21.689 [2024-12-06 12:13:08.303207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:21.689 [2024-12-06 12:13:08.332676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.689 [2024-12-06 12:13:08.332831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.689 [2024-12-06 12:13:08.332834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.948 [2024-12-06 12:13:08.373774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.517 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.517 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:22.518 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:22.518 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58878 00:05:22.518 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58878 /var/tmp/spdk2.sock 00:05:22.518 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58878 ']' 00:05:22.518 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.518 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.518 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.518 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.518 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.518 [2024-12-06 12:13:09.152021] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:22.518 [2024-12-06 12:13:09.152123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58878 ] 00:05:22.777 [2024-12-06 12:13:09.306250] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:22.777 [2024-12-06 12:13:09.306301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.777 [2024-12-06 12:13:09.368326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.777 [2024-12-06 12:13:09.372341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.777 [2024-12-06 12:13:09.372341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:23.037 [2024-12-06 12:13:09.449487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.037 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.037 [2024-12-06 12:13:09.686346] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58860 has claimed it. 00:05:23.296 request: 00:05:23.296 { 00:05:23.297 "method": "framework_enable_cpumask_locks", 00:05:23.297 "req_id": 1 00:05:23.297 } 00:05:23.297 Got JSON-RPC error response 00:05:23.297 response: 00:05:23.297 { 00:05:23.297 "code": -32603, 00:05:23.297 "message": "Failed to claim CPU core: 2" 00:05:23.297 } 00:05:23.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
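The exchange just above is the heart of the locking_overlapped_coremask_via_rpc case: both targets were started with --disable-cpumask-locks, so neither holds the per-core lock files at boot, and the lock is only taken when framework_enable_cpumask_locks is issued over RPC. The first caller claims cores 0-2; the second, whose mask overlaps on core 2, gets the -32603 error shown. A minimal sketch of the same sequence, assuming an SPDK checkout with build/bin/spdk_tgt and scripts/rpc.py on hand (these invocations are not part of the captured run):

    # Two targets with overlapping masks; core-lock claiming is deferred at startup.
    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &

    # First enable claims /var/tmp/spdk_cpu_lock_000..002 and succeeds.
    scripts/rpc.py framework_enable_cpumask_locks

    # Second enable overlaps on core 2 and is expected to fail with
    #   {"code": -32603, "message": "Failed to claim CPU core: 2"}
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks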
00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58860 /var/tmp/spdk.sock 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58860 ']' 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58878 /var/tmp/spdk2.sock 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58878 ']' 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
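The check_remaining_locks step that follows is just a glob-and-compare over the per-core lock files: it expands /var/tmp/spdk_cpu_lock_* and expects exactly the files for the claimed mask. Roughly, for the 0x7 mask used here (a sketch, not the literal helper source):

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0, 1, 2 for -m 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"

The trailing echo is only illustrative; in the captured run the comparison is the bare [[ ... ]] test visible in the trace.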
00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.297 12:13:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.865 12:13:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.865 12:13:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.865 12:13:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:23.865 12:13:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:23.865 12:13:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:23.865 12:13:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:23.865 ************************************ 00:05:23.865 END TEST locking_overlapped_coremask_via_rpc 00:05:23.865 ************************************ 00:05:23.865 00:05:23.865 real 0m2.125s 00:05:23.865 user 0m1.194s 00:05:23.865 sys 0m0.119s 00:05:23.865 12:13:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.865 12:13:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.865 12:13:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:23.865 12:13:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58860 ]] 00:05:23.865 12:13:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58860 00:05:23.865 12:13:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58860 ']' 00:05:23.865 12:13:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58860 00:05:23.865 12:13:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:23.865 12:13:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.865 12:13:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58860 00:05:23.865 killing process with pid 58860 00:05:23.865 12:13:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.865 12:13:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.865 12:13:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58860' 00:05:23.865 12:13:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58860 00:05:23.865 12:13:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58860 00:05:24.125 12:13:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58878 ]] 00:05:24.125 12:13:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58878 00:05:24.125 12:13:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58878 ']' 00:05:24.125 12:13:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58878 00:05:24.125 12:13:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:24.125 12:13:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.125 
12:13:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58878 00:05:24.125 killing process with pid 58878 00:05:24.125 12:13:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:24.125 12:13:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:24.125 12:13:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58878' 00:05:24.125 12:13:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58878 00:05:24.125 12:13:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58878 00:05:24.385 12:13:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:24.385 12:13:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:24.385 12:13:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58860 ]] 00:05:24.385 12:13:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58860 00:05:24.385 12:13:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58860 ']' 00:05:24.385 12:13:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58860 00:05:24.385 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58860) - No such process 00:05:24.385 Process with pid 58860 is not found 00:05:24.385 12:13:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58860 is not found' 00:05:24.385 12:13:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58878 ]] 00:05:24.385 12:13:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58878 00:05:24.385 12:13:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58878 ']' 00:05:24.385 12:13:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58878 00:05:24.385 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58878) - No such process 00:05:24.385 Process with pid 58878 is not found 00:05:24.385 12:13:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58878 is not found' 00:05:24.385 12:13:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:24.385 00:05:24.385 real 0m15.339s 00:05:24.385 user 0m27.317s 00:05:24.385 sys 0m3.961s 00:05:24.385 12:13:10 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.385 ************************************ 00:05:24.385 END TEST cpu_locks 00:05:24.385 ************************************ 00:05:24.385 12:13:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.385 ************************************ 00:05:24.385 END TEST event 00:05:24.385 ************************************ 00:05:24.385 00:05:24.385 real 0m41.479s 00:05:24.385 user 1m21.312s 00:05:24.385 sys 0m7.089s 00:05:24.385 12:13:10 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.385 12:13:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.385 12:13:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:24.385 12:13:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.385 12:13:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.385 12:13:10 -- common/autotest_common.sh@10 -- # set +x 00:05:24.385 ************************************ 00:05:24.385 START TEST thread 00:05:24.385 ************************************ 00:05:24.385 12:13:10 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:24.385 * Looking for test storage... 
00:05:24.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:24.385 12:13:10 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.385 12:13:10 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.385 12:13:10 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.385 12:13:11 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.385 12:13:11 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.386 12:13:11 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.386 12:13:11 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.386 12:13:11 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.386 12:13:11 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.386 12:13:11 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.386 12:13:11 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.386 12:13:11 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.386 12:13:11 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.386 12:13:11 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.386 12:13:11 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.386 12:13:11 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:24.386 12:13:11 thread -- scripts/common.sh@345 -- # : 1 00:05:24.386 12:13:11 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.386 12:13:11 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.386 12:13:11 thread -- scripts/common.sh@365 -- # decimal 1 00:05:24.386 12:13:11 thread -- scripts/common.sh@353 -- # local d=1 00:05:24.386 12:13:11 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.386 12:13:11 thread -- scripts/common.sh@355 -- # echo 1 00:05:24.386 12:13:11 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.386 12:13:11 thread -- scripts/common.sh@366 -- # decimal 2 00:05:24.386 12:13:11 thread -- scripts/common.sh@353 -- # local d=2 00:05:24.386 12:13:11 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.386 12:13:11 thread -- scripts/common.sh@355 -- # echo 2 00:05:24.644 12:13:11 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.644 12:13:11 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.644 12:13:11 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.644 12:13:11 thread -- scripts/common.sh@368 -- # return 0 00:05:24.644 12:13:11 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.644 12:13:11 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.644 --rc genhtml_branch_coverage=1 00:05:24.644 --rc genhtml_function_coverage=1 00:05:24.644 --rc genhtml_legend=1 00:05:24.644 --rc geninfo_all_blocks=1 00:05:24.644 --rc geninfo_unexecuted_blocks=1 00:05:24.644 00:05:24.644 ' 00:05:24.644 12:13:11 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.644 --rc genhtml_branch_coverage=1 00:05:24.644 --rc genhtml_function_coverage=1 00:05:24.644 --rc genhtml_legend=1 00:05:24.644 --rc geninfo_all_blocks=1 00:05:24.645 --rc geninfo_unexecuted_blocks=1 00:05:24.645 00:05:24.645 ' 00:05:24.645 12:13:11 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:24.645 --rc genhtml_branch_coverage=1 00:05:24.645 --rc genhtml_function_coverage=1 00:05:24.645 --rc genhtml_legend=1 00:05:24.645 --rc geninfo_all_blocks=1 00:05:24.645 --rc geninfo_unexecuted_blocks=1 00:05:24.645 00:05:24.645 ' 00:05:24.645 12:13:11 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.645 --rc genhtml_branch_coverage=1 00:05:24.645 --rc genhtml_function_coverage=1 00:05:24.645 --rc genhtml_legend=1 00:05:24.645 --rc geninfo_all_blocks=1 00:05:24.645 --rc geninfo_unexecuted_blocks=1 00:05:24.645 00:05:24.645 ' 00:05:24.645 12:13:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:24.645 12:13:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:24.645 12:13:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.645 12:13:11 thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.645 ************************************ 00:05:24.645 START TEST thread_poller_perf 00:05:24.645 ************************************ 00:05:24.645 12:13:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:24.645 [2024-12-06 12:13:11.073905] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:24.645 [2024-12-06 12:13:11.074119] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59001 ] 00:05:24.645 [2024-12-06 12:13:11.212930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.645 [2024-12-06 12:13:11.239477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.645 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:26.023 [2024-12-06T12:13:12.681Z] ====================================== 00:05:26.023 [2024-12-06T12:13:12.681Z] busy:2207173540 (cyc) 00:05:26.023 [2024-12-06T12:13:12.681Z] total_run_count: 399000 00:05:26.023 [2024-12-06T12:13:12.681Z] tsc_hz: 2200000000 (cyc) 00:05:26.023 [2024-12-06T12:13:12.681Z] ====================================== 00:05:26.023 [2024-12-06T12:13:12.681Z] poller_cost: 5531 (cyc), 2514 (nsec) 00:05:26.023 00:05:26.023 real 0m1.225s 00:05:26.023 user 0m1.089s 00:05:26.023 sys 0m0.029s 00:05:26.023 12:13:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.023 ************************************ 00:05:26.023 END TEST thread_poller_perf 00:05:26.023 ************************************ 00:05:26.023 12:13:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.023 12:13:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:26.023 12:13:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:26.023 12:13:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.023 12:13:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.023 ************************************ 00:05:26.023 START TEST thread_poller_perf 00:05:26.023 ************************************ 00:05:26.023 12:13:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:26.023 [2024-12-06 12:13:12.350479] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:26.023 [2024-12-06 12:13:12.350587] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59031 ] 00:05:26.023 [2024-12-06 12:13:12.492696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.023 Running 1000 pollers for 1 seconds with 0 microseconds period. 
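The poller_cost figure in each summary is derived from the two counters above it: busy TSC cycles divided by total_run_count, then converted to nanoseconds using the reported tsc_hz. Re-deriving the 1 µs-period numbers as a quick check (this is back-of-the-envelope arithmetic, not additional output from the run):

    echo $(( 2207173540 / 399000 ))               # 5531 cycles per poller invocation
    echo $(( 5531 * 1000000000 / 2200000000 ))    # 2514 ns at tsc_hz = 2.2 GHz

The 0 µs-period run reported below lands at 460 cycles (209 ns) by the same arithmetic.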
00:05:26.023 [2024-12-06 12:13:12.519527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.960 [2024-12-06T12:13:13.618Z] ====================================== 00:05:26.960 [2024-12-06T12:13:13.618Z] busy:2201639724 (cyc) 00:05:26.960 [2024-12-06T12:13:13.618Z] total_run_count: 4782000 00:05:26.960 [2024-12-06T12:13:13.618Z] tsc_hz: 2200000000 (cyc) 00:05:26.961 [2024-12-06T12:13:13.619Z] ====================================== 00:05:26.961 [2024-12-06T12:13:13.619Z] poller_cost: 460 (cyc), 209 (nsec) 00:05:26.961 ************************************ 00:05:26.961 END TEST thread_poller_perf 00:05:26.961 ************************************ 00:05:26.961 00:05:26.961 real 0m1.221s 00:05:26.961 user 0m1.082s 00:05:26.961 sys 0m0.033s 00:05:26.961 12:13:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.961 12:13:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.961 12:13:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:26.961 ************************************ 00:05:26.961 END TEST thread 00:05:26.961 ************************************ 00:05:26.961 00:05:26.961 real 0m2.707s 00:05:26.961 user 0m2.294s 00:05:26.961 sys 0m0.200s 00:05:26.961 12:13:13 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.961 12:13:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.220 12:13:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:27.220 12:13:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:27.220 12:13:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.220 12:13:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.220 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:05:27.220 ************************************ 00:05:27.220 START TEST app_cmdline 00:05:27.220 ************************************ 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:27.220 * Looking for test storage... 
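The cmdline.sh run starting here exercises the RPC allow-list: spdk_tgt is launched with --rpcs-allowed so that only spdk_get_version and rpc_get_methods are reachable, and anything else must come back as JSON-RPC error -32601 ("Method not found"), which is exactly what the env_dpdk_get_mem_stats call further down produces. Condensed to its essentials, under the same assumptions as the sketches above (not part of the captured run):

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    scripts/rpc.py spdk_get_version         # allowed: returns the version object
    scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats   # filtered: -32601, "Method not found"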
00:05:27.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.220 12:13:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.220 --rc genhtml_branch_coverage=1 00:05:27.220 --rc genhtml_function_coverage=1 00:05:27.220 --rc genhtml_legend=1 00:05:27.220 --rc geninfo_all_blocks=1 00:05:27.220 --rc geninfo_unexecuted_blocks=1 00:05:27.220 00:05:27.220 ' 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.220 --rc genhtml_branch_coverage=1 00:05:27.220 --rc genhtml_function_coverage=1 00:05:27.220 --rc genhtml_legend=1 00:05:27.220 --rc geninfo_all_blocks=1 00:05:27.220 --rc geninfo_unexecuted_blocks=1 00:05:27.220 
00:05:27.220 ' 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.220 --rc genhtml_branch_coverage=1 00:05:27.220 --rc genhtml_function_coverage=1 00:05:27.220 --rc genhtml_legend=1 00:05:27.220 --rc geninfo_all_blocks=1 00:05:27.220 --rc geninfo_unexecuted_blocks=1 00:05:27.220 00:05:27.220 ' 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.220 --rc genhtml_branch_coverage=1 00:05:27.220 --rc genhtml_function_coverage=1 00:05:27.220 --rc genhtml_legend=1 00:05:27.220 --rc geninfo_all_blocks=1 00:05:27.220 --rc geninfo_unexecuted_blocks=1 00:05:27.220 00:05:27.220 ' 00:05:27.220 12:13:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:27.220 12:13:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59118 00:05:27.220 12:13:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59118 00:05:27.220 12:13:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59118 ']' 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.220 12:13:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:27.480 [2024-12-06 12:13:13.878877] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:27.480 [2024-12-06 12:13:13.879125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59118 ] 00:05:27.480 [2024-12-06 12:13:14.016667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.480 [2024-12-06 12:13:14.043994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.480 [2024-12-06 12:13:14.079421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.739 12:13:14 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.739 12:13:14 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:27.739 12:13:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:27.998 { 00:05:27.998 "version": "SPDK v25.01-pre git sha1 b82e5bf03", 00:05:27.998 "fields": { 00:05:27.999 "major": 25, 00:05:27.999 "minor": 1, 00:05:27.999 "patch": 0, 00:05:27.999 "suffix": "-pre", 00:05:27.999 "commit": "b82e5bf03" 00:05:27.999 } 00:05:27.999 } 00:05:27.999 12:13:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:27.999 12:13:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:27.999 12:13:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:27.999 12:13:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:27.999 12:13:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:27.999 12:13:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.999 12:13:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.999 12:13:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:27.999 12:13:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:27.999 12:13:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:27.999 12:13:14 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:28.258 request: 00:05:28.258 { 00:05:28.258 "method": "env_dpdk_get_mem_stats", 00:05:28.258 "req_id": 1 00:05:28.258 } 00:05:28.258 Got JSON-RPC error response 00:05:28.258 response: 00:05:28.258 { 00:05:28.258 "code": -32601, 00:05:28.258 "message": "Method not found" 00:05:28.258 } 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:28.258 12:13:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59118 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59118 ']' 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59118 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59118 00:05:28.258 killing process with pid 59118 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59118' 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@973 -- # kill 59118 00:05:28.258 12:13:14 app_cmdline -- common/autotest_common.sh@978 -- # wait 59118 00:05:28.517 ************************************ 00:05:28.518 END TEST app_cmdline 00:05:28.518 ************************************ 00:05:28.518 00:05:28.518 real 0m1.439s 00:05:28.518 user 0m1.958s 00:05:28.518 sys 0m0.329s 00:05:28.518 12:13:15 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.518 12:13:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:28.518 12:13:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:28.518 12:13:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.518 12:13:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.518 12:13:15 -- common/autotest_common.sh@10 -- # set +x 00:05:28.518 ************************************ 00:05:28.518 START TEST version 00:05:28.518 ************************************ 00:05:28.518 12:13:15 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:28.776 * Looking for test storage... 
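version.sh, which runs next, never hard-codes a version: it greps the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX defines out of include/spdk/version.h, assembles the string, and cross-checks it against the installed Python package. A condensed paraphrase of the trace that follows (argument casing and paths simplified relative to the real script):

    get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 25
    minor=$(get_header_version MINOR)    # 1
    patch=$(get_header_version PATCH)    # 0
    version="$major.$minor"              # patch is appended only when non-zero
    # A -pre suffix is reported as rc0 by the python package, so 25.1 is expected as 25.1rc0:
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ "$py_version" == "${version}rc0" ]]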
00:05:28.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:28.776 12:13:15 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.776 12:13:15 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.776 12:13:15 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.776 12:13:15 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.776 12:13:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.776 12:13:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.776 12:13:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.776 12:13:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.776 12:13:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.776 12:13:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.776 12:13:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.776 12:13:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.776 12:13:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.776 12:13:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.776 12:13:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.776 12:13:15 version -- scripts/common.sh@344 -- # case "$op" in 00:05:28.776 12:13:15 version -- scripts/common.sh@345 -- # : 1 00:05:28.776 12:13:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.776 12:13:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.776 12:13:15 version -- scripts/common.sh@365 -- # decimal 1 00:05:28.776 12:13:15 version -- scripts/common.sh@353 -- # local d=1 00:05:28.776 12:13:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.776 12:13:15 version -- scripts/common.sh@355 -- # echo 1 00:05:28.776 12:13:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.776 12:13:15 version -- scripts/common.sh@366 -- # decimal 2 00:05:28.776 12:13:15 version -- scripts/common.sh@353 -- # local d=2 00:05:28.776 12:13:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.776 12:13:15 version -- scripts/common.sh@355 -- # echo 2 00:05:28.776 12:13:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.776 12:13:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.776 12:13:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.776 12:13:15 version -- scripts/common.sh@368 -- # return 0 00:05:28.776 12:13:15 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.776 12:13:15 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.776 --rc genhtml_branch_coverage=1 00:05:28.776 --rc genhtml_function_coverage=1 00:05:28.776 --rc genhtml_legend=1 00:05:28.776 --rc geninfo_all_blocks=1 00:05:28.776 --rc geninfo_unexecuted_blocks=1 00:05:28.776 00:05:28.776 ' 00:05:28.776 12:13:15 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.776 --rc genhtml_branch_coverage=1 00:05:28.776 --rc genhtml_function_coverage=1 00:05:28.776 --rc genhtml_legend=1 00:05:28.776 --rc geninfo_all_blocks=1 00:05:28.776 --rc geninfo_unexecuted_blocks=1 00:05:28.776 00:05:28.776 ' 00:05:28.776 12:13:15 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.776 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:28.776 --rc genhtml_branch_coverage=1 00:05:28.776 --rc genhtml_function_coverage=1 00:05:28.776 --rc genhtml_legend=1 00:05:28.776 --rc geninfo_all_blocks=1 00:05:28.776 --rc geninfo_unexecuted_blocks=1 00:05:28.776 00:05:28.776 ' 00:05:28.776 12:13:15 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.776 --rc genhtml_branch_coverage=1 00:05:28.776 --rc genhtml_function_coverage=1 00:05:28.776 --rc genhtml_legend=1 00:05:28.776 --rc geninfo_all_blocks=1 00:05:28.776 --rc geninfo_unexecuted_blocks=1 00:05:28.776 00:05:28.776 ' 00:05:28.776 12:13:15 version -- app/version.sh@17 -- # get_header_version major 00:05:28.776 12:13:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:28.776 12:13:15 version -- app/version.sh@14 -- # cut -f2 00:05:28.776 12:13:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.776 12:13:15 version -- app/version.sh@17 -- # major=25 00:05:28.776 12:13:15 version -- app/version.sh@18 -- # get_header_version minor 00:05:28.776 12:13:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:28.776 12:13:15 version -- app/version.sh@14 -- # cut -f2 00:05:28.776 12:13:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.776 12:13:15 version -- app/version.sh@18 -- # minor=1 00:05:28.776 12:13:15 version -- app/version.sh@19 -- # get_header_version patch 00:05:28.776 12:13:15 version -- app/version.sh@14 -- # cut -f2 00:05:28.776 12:13:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.776 12:13:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:28.776 12:13:15 version -- app/version.sh@19 -- # patch=0 00:05:28.776 12:13:15 version -- app/version.sh@20 -- # get_header_version suffix 00:05:28.776 12:13:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:28.776 12:13:15 version -- app/version.sh@14 -- # cut -f2 00:05:28.776 12:13:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:28.776 12:13:15 version -- app/version.sh@20 -- # suffix=-pre 00:05:28.776 12:13:15 version -- app/version.sh@22 -- # version=25.1 00:05:28.776 12:13:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:28.776 12:13:15 version -- app/version.sh@28 -- # version=25.1rc0 00:05:28.776 12:13:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:28.776 12:13:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:28.776 12:13:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:28.776 12:13:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:28.776 00:05:28.776 real 0m0.253s 00:05:28.776 user 0m0.168s 00:05:28.776 sys 0m0.119s 00:05:28.776 ************************************ 00:05:28.776 END TEST version 00:05:28.776 ************************************ 00:05:28.776 12:13:15 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.776 12:13:15 version -- common/autotest_common.sh@10 -- # set +x 00:05:28.776 12:13:15 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:28.776 12:13:15 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:28.776 12:13:15 -- spdk/autotest.sh@194 -- # uname -s 00:05:28.776 12:13:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:28.776 12:13:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:28.776 12:13:15 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:28.776 12:13:15 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:28.776 12:13:15 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:29.036 12:13:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.036 12:13:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.036 12:13:15 -- common/autotest_common.sh@10 -- # set +x 00:05:29.036 ************************************ 00:05:29.036 START TEST spdk_dd 00:05:29.036 ************************************ 00:05:29.036 12:13:15 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:29.036 * Looking for test storage... 00:05:29.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:29.036 12:13:15 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.036 12:13:15 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.036 12:13:15 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.036 12:13:15 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:29.036 12:13:15 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.036 12:13:15 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.036 --rc genhtml_branch_coverage=1 00:05:29.036 --rc genhtml_function_coverage=1 00:05:29.036 --rc genhtml_legend=1 00:05:29.036 --rc geninfo_all_blocks=1 00:05:29.036 --rc geninfo_unexecuted_blocks=1 00:05:29.036 00:05:29.036 ' 00:05:29.036 12:13:15 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.036 --rc genhtml_branch_coverage=1 00:05:29.036 --rc genhtml_function_coverage=1 00:05:29.036 --rc genhtml_legend=1 00:05:29.036 --rc geninfo_all_blocks=1 00:05:29.036 --rc geninfo_unexecuted_blocks=1 00:05:29.036 00:05:29.036 ' 00:05:29.036 12:13:15 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.036 --rc genhtml_branch_coverage=1 00:05:29.036 --rc genhtml_function_coverage=1 00:05:29.036 --rc genhtml_legend=1 00:05:29.036 --rc geninfo_all_blocks=1 00:05:29.036 --rc geninfo_unexecuted_blocks=1 00:05:29.036 00:05:29.036 ' 00:05:29.036 12:13:15 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.036 --rc genhtml_branch_coverage=1 00:05:29.036 --rc genhtml_function_coverage=1 00:05:29.036 --rc genhtml_legend=1 00:05:29.036 --rc geninfo_all_blocks=1 00:05:29.036 --rc geninfo_unexecuted_blocks=1 00:05:29.036 00:05:29.036 ' 00:05:29.036 12:13:15 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.036 12:13:15 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.036 12:13:15 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.036 12:13:15 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.036 12:13:15 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.036 12:13:15 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:29.036 12:13:15 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.036 12:13:15 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.555 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.555 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.555 12:13:16 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:29.555 12:13:16 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:29.555 12:13:16 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:29.555 12:13:16 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:29.556 12:13:16 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:29.556 12:13:16 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:29.556 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:29.557 * spdk_dd linked to liburing 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:29.557 12:13:16 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:29.557 12:13:16 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:29.557 12:13:16 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:29.558 12:13:16 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:29.558 12:13:16 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:29.558 12:13:16 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:29.558 12:13:16 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:29.558 12:13:16 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:29.558 12:13:16 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.558 12:13:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:29.558 ************************************ 00:05:29.558 START TEST spdk_dd_basic_rw 00:05:29.558 ************************************ 00:05:29.558 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:29.817 * Looking for test storage... 00:05:29.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.817 --rc genhtml_branch_coverage=1 00:05:29.817 --rc genhtml_function_coverage=1 00:05:29.817 --rc genhtml_legend=1 00:05:29.817 --rc geninfo_all_blocks=1 00:05:29.817 --rc geninfo_unexecuted_blocks=1 00:05:29.817 00:05:29.817 ' 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.817 --rc genhtml_branch_coverage=1 00:05:29.817 --rc genhtml_function_coverage=1 00:05:29.817 --rc genhtml_legend=1 00:05:29.817 --rc geninfo_all_blocks=1 00:05:29.817 --rc geninfo_unexecuted_blocks=1 00:05:29.817 00:05:29.817 ' 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.817 --rc genhtml_branch_coverage=1 00:05:29.817 --rc genhtml_function_coverage=1 00:05:29.817 --rc genhtml_legend=1 00:05:29.817 --rc geninfo_all_blocks=1 00:05:29.817 --rc geninfo_unexecuted_blocks=1 00:05:29.817 00:05:29.817 ' 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.817 --rc genhtml_branch_coverage=1 00:05:29.817 --rc genhtml_function_coverage=1 00:05:29.817 --rc genhtml_legend=1 00:05:29.817 --rc geninfo_all_blocks=1 00:05:29.817 --rc geninfo_unexecuted_blocks=1 00:05:29.817 00:05:29.817 ' 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
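[Editor's sketch, not part of the captured log.] The trace that follows (dd/common.sh@124-134) detects the drive's native block size by running spdk_nvme_identify against the PCIe address and pulling the data size of the namespace's current LBA format out of the text dump. A minimal bash sketch of that lookup is below; the function name get_native_nvme_bs_sketch is hypothetical, the identify binary path is the one shown in the log, and the real helper lives in test/dd/common.sh.

```bash
#!/usr/bin/env bash
# Sketch of the native-block-size lookup traced below (hypothetical helper name).
get_native_nvme_bs_sketch() {
    local pci=$1 id lbaf
    local re_current='Current LBA Format: *LBA Format #([0-9]+)'
    # Capture the controller/namespace identify dump for the given PCIe address.
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    # Which LBA format is the namespace currently using? (e.g. "#04")
    [[ $id =~ $re_current ]] || return 1
    lbaf=${BASH_REMATCH[1]}
    # Read that format's data size, e.g. "LBA Format #04: Data Size: 4096 ..."
    local re_size="LBA Format #$lbaf: Data Size: *([0-9]+)"
    [[ $id =~ $re_size ]] || return 1
    echo "${BASH_REMATCH[1]}"
}

# Usage matching the trace: prints 4096 for the QEMU namespace at 0000:00:10.0.
get_native_nvme_bs_sketch 0000:00:10.0
```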
00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:29.817 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:30.079 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:30.079 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.080 ************************************ 00:05:30.080 START TEST dd_bs_lt_native_bs 00:05:30.080 ************************************ 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:30.080 12:13:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:30.080 { 00:05:30.080 "subsystems": [ 00:05:30.080 { 00:05:30.080 "subsystem": "bdev", 00:05:30.080 "config": [ 00:05:30.080 { 00:05:30.080 "params": { 00:05:30.080 "trtype": "pcie", 00:05:30.080 "traddr": "0000:00:10.0", 00:05:30.080 "name": "Nvme0" 00:05:30.080 }, 00:05:30.080 "method": "bdev_nvme_attach_controller" 00:05:30.080 }, 00:05:30.080 { 00:05:30.080 "method": "bdev_wait_for_examine" 00:05:30.080 } 00:05:30.080 ] 00:05:30.080 } 00:05:30.080 ] 00:05:30.080 } 00:05:30.080 [2024-12-06 12:13:16.617550] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:30.080 [2024-12-06 12:13:16.617725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59458 ] 00:05:30.338 [2024-12-06 12:13:16.756639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.338 [2024-12-06 12:13:16.783640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.338 [2024-12-06 12:13:16.809799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.338 [2024-12-06 12:13:16.897742] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:30.338 [2024-12-06 12:13:16.897813] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:30.338 [2024-12-06 12:13:16.967324] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:05:30.597 ************************************ 00:05:30.597 END TEST dd_bs_lt_native_bs 00:05:30.597 ************************************ 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.597 00:05:30.597 real 0m0.464s 00:05:30.597 user 0m0.322s 00:05:30.597 sys 0m0.097s 00:05:30.597 
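Editor's note on the check traced above: basic_rw.sh derives the native block size from the identify dump (the "LBA Format #04: Data Size: *([0-9]+)" match yields lbaf=4096), and dd_bs_lt_native_bs then asserts that spdk_dd rejects a --bs smaller than that size; the NOT wrapper expects the non-zero exit and the "--bs value cannot be less than ... native block size" error shown in the log. A minimal stand-alone sketch of the same check follows. The identify_output and bdev_json variables, the /dev/urandom input, and the error handling are assumptions; the regex, the 2048/4096 values, and the spdk_dd flags are taken from the trace.

# Sketch only: $identify_output is assumed to hold the "Identify" dump captured
# above, and $bdev_json the bdev config JSON printed in the trace.
if [[ "$identify_output" =~ "LBA Format #04: Data Size: "*([0-9]+) ]]; then
    native_bs=${BASH_REMATCH[1]}    # 4096 in this run
fi

# spdk_dd must refuse a --bs below the native block size: it prints
# "--bs value cannot be less than ... native block size" and exits non-zero,
# which the NOT wrapper in the test turns into a pass.
if ./build/bin/spdk_dd --if=/dev/urandom --ob=Nvme0n1 --bs=2048 \
       --json <(printf '%s' "$bdev_json"); then
    echo "unexpected: spdk_dd accepted --bs smaller than ${native_bs:-4096}" >&2
    exit 1
fi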
12:13:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.597 ************************************ 00:05:30.597 START TEST dd_rw 00:05:30.597 ************************************ 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:30.597 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:31.163 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:31.163 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:31.163 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:31.163 12:13:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:31.163 [2024-12-06 12:13:17.768840] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:31.163 [2024-12-06 12:13:17.769102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59489 ] 00:05:31.163 { 00:05:31.163 "subsystems": [ 00:05:31.163 { 00:05:31.163 "subsystem": "bdev", 00:05:31.163 "config": [ 00:05:31.163 { 00:05:31.163 "params": { 00:05:31.163 "trtype": "pcie", 00:05:31.163 "traddr": "0000:00:10.0", 00:05:31.163 "name": "Nvme0" 00:05:31.163 }, 00:05:31.163 "method": "bdev_nvme_attach_controller" 00:05:31.163 }, 00:05:31.163 { 00:05:31.163 "method": "bdev_wait_for_examine" 00:05:31.163 } 00:05:31.163 ] 00:05:31.163 } 00:05:31.163 ] 00:05:31.163 } 00:05:31.421 [2024-12-06 12:13:17.920897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.421 [2024-12-06 12:13:17.961161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.421 [2024-12-06 12:13:17.997654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.679  [2024-12-06T12:13:18.337Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:31.679 00:05:31.679 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:31.679 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:31.679 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:31.679 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:31.679 [2024-12-06 12:13:18.268427] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:31.679 [2024-12-06 12:13:18.268971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59502 ] 00:05:31.679 { 00:05:31.679 "subsystems": [ 00:05:31.679 { 00:05:31.679 "subsystem": "bdev", 00:05:31.679 "config": [ 00:05:31.679 { 00:05:31.679 "params": { 00:05:31.679 "trtype": "pcie", 00:05:31.679 "traddr": "0000:00:10.0", 00:05:31.679 "name": "Nvme0" 00:05:31.680 }, 00:05:31.680 "method": "bdev_nvme_attach_controller" 00:05:31.680 }, 00:05:31.680 { 00:05:31.680 "method": "bdev_wait_for_examine" 00:05:31.680 } 00:05:31.680 ] 00:05:31.680 } 00:05:31.680 ] 00:05:31.680 } 00:05:31.938 [2024-12-06 12:13:18.412209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.938 [2024-12-06 12:13:18.442012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.938 [2024-12-06 12:13:18.473273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.938  [2024-12-06T12:13:18.854Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:32.196 00:05:32.196 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:32.196 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:32.196 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:32.196 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:32.196 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:32.196 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:32.196 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:32.196 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:32.196 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:32.196 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:32.196 12:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:32.196 { 00:05:32.196 "subsystems": [ 00:05:32.196 { 00:05:32.196 "subsystem": "bdev", 00:05:32.196 "config": [ 00:05:32.196 { 00:05:32.196 "params": { 00:05:32.196 "trtype": "pcie", 00:05:32.196 "traddr": "0000:00:10.0", 00:05:32.196 "name": "Nvme0" 00:05:32.196 }, 00:05:32.196 "method": "bdev_nvme_attach_controller" 00:05:32.196 }, 00:05:32.196 { 00:05:32.196 "method": "bdev_wait_for_examine" 00:05:32.196 } 00:05:32.196 ] 00:05:32.196 } 00:05:32.196 ] 00:05:32.196 } 00:05:32.196 [2024-12-06 12:13:18.744457] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:32.196 [2024-12-06 12:13:18.744562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59518 ] 00:05:32.454 [2024-12-06 12:13:18.888350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.454 [2024-12-06 12:13:18.916384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.454 [2024-12-06 12:13:18.944103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.454  [2024-12-06T12:13:19.371Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:32.713 00:05:32.713 12:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:32.713 12:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:32.713 12:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:32.713 12:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:32.713 12:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:32.713 12:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:32.713 12:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:33.296 12:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:33.297 12:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:33.297 12:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:33.297 12:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:33.297 [2024-12-06 12:13:19.781144] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
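Editor's note: the dd_rw iterations that follow all repeat the cycle just completed for bs=4096, qd=1: generate count blocks of data into dd.dump0, write them to the Nvme0n1 bdev with spdk_dd, read the same amount back into dd.dump1, diff the two files, then clear the bdev by writing 1 MiB of zeros before the next block-size/queue-depth pair (bss are native_bs << 0..2, i.e. 4096/8192/16384; qds are 1 and 64). A condensed sketch of that loop is below. Using head -c over /dev/urandom stands in for the harness's gen_bytes helper, $bdev_json is the config JSON shown in the trace, and deriving count as 61440/bs is inferred from the logged count/size pairs (15/61440, 7/57344, 3/49152).

native_bs=4096
qds=(1 64)
dump0=test/dd/dd.dump0
dump1=test/dd/dd.dump1

for i in 0 1 2; do                          # mirrors bss+=((native_bs << bs)) in the trace
    bs=$((native_bs << i))                  # 4096, 8192, 16384
    count=$((61440 / bs))                   # 15, 7, 3 -> size 61440/57344/49152
    for qd in "${qds[@]}"; do
        head -c $((bs * count)) /dev/urandom > "$dump0"    # stand-in for gen_bytes
        ./build/bin/spdk_dd --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" \
            --json <(printf '%s' "$bdev_json")
        ./build/bin/spdk_dd --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" \
            --count="$count" --json <(printf '%s' "$bdev_json")
        diff -q "$dump0" "$dump1"
        # clear_nvme between runs: overwrite the first 1 MiB with zeros
        ./build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 \
            --json <(printf '%s' "$bdev_json")
    done
done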
00:05:33.297 [2024-12-06 12:13:19.781262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59537 ] 00:05:33.297 { 00:05:33.297 "subsystems": [ 00:05:33.297 { 00:05:33.297 "subsystem": "bdev", 00:05:33.297 "config": [ 00:05:33.297 { 00:05:33.297 "params": { 00:05:33.297 "trtype": "pcie", 00:05:33.297 "traddr": "0000:00:10.0", 00:05:33.297 "name": "Nvme0" 00:05:33.297 }, 00:05:33.297 "method": "bdev_nvme_attach_controller" 00:05:33.297 }, 00:05:33.297 { 00:05:33.297 "method": "bdev_wait_for_examine" 00:05:33.297 } 00:05:33.297 ] 00:05:33.297 } 00:05:33.297 ] 00:05:33.297 } 00:05:33.297 [2024-12-06 12:13:19.925270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.297 [2024-12-06 12:13:19.953801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.556 [2024-12-06 12:13:19.982947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.556  [2024-12-06T12:13:20.214Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:33.556 00:05:33.556 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:33.556 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:33.556 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:33.556 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:33.815 [2024-12-06 12:13:20.246306] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:33.815 [2024-12-06 12:13:20.246400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59550 ] 00:05:33.815 { 00:05:33.815 "subsystems": [ 00:05:33.815 { 00:05:33.815 "subsystem": "bdev", 00:05:33.815 "config": [ 00:05:33.815 { 00:05:33.815 "params": { 00:05:33.815 "trtype": "pcie", 00:05:33.815 "traddr": "0000:00:10.0", 00:05:33.815 "name": "Nvme0" 00:05:33.815 }, 00:05:33.815 "method": "bdev_nvme_attach_controller" 00:05:33.815 }, 00:05:33.815 { 00:05:33.815 "method": "bdev_wait_for_examine" 00:05:33.815 } 00:05:33.815 ] 00:05:33.815 } 00:05:33.815 ] 00:05:33.815 } 00:05:33.815 [2024-12-06 12:13:20.389539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.815 [2024-12-06 12:13:20.417511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.815 [2024-12-06 12:13:20.445851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.075  [2024-12-06T12:13:20.733Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:34.075 00:05:34.075 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.075 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:34.075 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:34.075 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:34.075 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:34.075 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:34.075 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:34.075 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:34.075 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:34.075 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:34.075 12:13:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:34.075 { 00:05:34.075 "subsystems": [ 00:05:34.075 { 00:05:34.075 "subsystem": "bdev", 00:05:34.075 "config": [ 00:05:34.075 { 00:05:34.075 "params": { 00:05:34.075 "trtype": "pcie", 00:05:34.075 "traddr": "0000:00:10.0", 00:05:34.075 "name": "Nvme0" 00:05:34.075 }, 00:05:34.075 "method": "bdev_nvme_attach_controller" 00:05:34.075 }, 00:05:34.075 { 00:05:34.075 "method": "bdev_wait_for_examine" 00:05:34.075 } 00:05:34.075 ] 00:05:34.075 } 00:05:34.075 ] 00:05:34.075 } 00:05:34.075 [2024-12-06 12:13:20.711617] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:34.075 [2024-12-06 12:13:20.711712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59566 ] 00:05:34.334 [2024-12-06 12:13:20.856550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.334 [2024-12-06 12:13:20.883149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.334 [2024-12-06 12:13:20.909385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.594  [2024-12-06T12:13:21.252Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:34.594 00:05:34.594 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:34.594 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:34.594 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:34.594 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:34.594 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:34.594 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:34.594 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:34.594 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:35.162 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:35.162 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:35.162 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:35.162 12:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:35.162 [2024-12-06 12:13:21.679136] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:35.162 [2024-12-06 12:13:21.679263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59584 ] 00:05:35.162 { 00:05:35.162 "subsystems": [ 00:05:35.162 { 00:05:35.162 "subsystem": "bdev", 00:05:35.162 "config": [ 00:05:35.162 { 00:05:35.162 "params": { 00:05:35.162 "trtype": "pcie", 00:05:35.162 "traddr": "0000:00:10.0", 00:05:35.162 "name": "Nvme0" 00:05:35.162 }, 00:05:35.162 "method": "bdev_nvme_attach_controller" 00:05:35.162 }, 00:05:35.162 { 00:05:35.162 "method": "bdev_wait_for_examine" 00:05:35.162 } 00:05:35.162 ] 00:05:35.162 } 00:05:35.162 ] 00:05:35.162 } 00:05:35.162 [2024-12-06 12:13:21.817750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.421 [2024-12-06 12:13:21.845789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.421 [2024-12-06 12:13:21.875085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.421  [2024-12-06T12:13:22.079Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:35.421 00:05:35.680 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:35.680 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:35.680 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:35.680 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:35.680 [2024-12-06 12:13:22.136704] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:35.680 [2024-12-06 12:13:22.136802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59593 ] 00:05:35.680 { 00:05:35.680 "subsystems": [ 00:05:35.680 { 00:05:35.680 "subsystem": "bdev", 00:05:35.680 "config": [ 00:05:35.680 { 00:05:35.680 "params": { 00:05:35.680 "trtype": "pcie", 00:05:35.680 "traddr": "0000:00:10.0", 00:05:35.681 "name": "Nvme0" 00:05:35.681 }, 00:05:35.681 "method": "bdev_nvme_attach_controller" 00:05:35.681 }, 00:05:35.681 { 00:05:35.681 "method": "bdev_wait_for_examine" 00:05:35.681 } 00:05:35.681 ] 00:05:35.681 } 00:05:35.681 ] 00:05:35.681 } 00:05:35.681 [2024-12-06 12:13:22.281423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.681 [2024-12-06 12:13:22.310082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.940 [2024-12-06 12:13:22.339910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.940  [2024-12-06T12:13:22.598Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:35.940 00:05:35.940 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:35.940 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:35.940 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:35.940 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:35.940 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:35.940 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:35.940 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:35.940 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:35.940 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:35.940 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:35.940 12:13:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:35.940 { 00:05:35.940 "subsystems": [ 00:05:35.940 { 00:05:35.940 "subsystem": "bdev", 00:05:35.940 "config": [ 00:05:35.940 { 00:05:35.940 "params": { 00:05:35.940 "trtype": "pcie", 00:05:35.940 "traddr": "0000:00:10.0", 00:05:35.940 "name": "Nvme0" 00:05:35.940 }, 00:05:35.940 "method": "bdev_nvme_attach_controller" 00:05:35.940 }, 00:05:35.940 { 00:05:35.940 "method": "bdev_wait_for_examine" 00:05:35.940 } 00:05:35.940 ] 00:05:35.940 } 00:05:35.940 ] 00:05:35.940 } 00:05:36.199 [2024-12-06 12:13:22.604854] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:36.199 [2024-12-06 12:13:22.604954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59608 ] 00:05:36.199 [2024-12-06 12:13:22.749136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.199 [2024-12-06 12:13:22.778029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.199 [2024-12-06 12:13:22.807764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.459  [2024-12-06T12:13:23.117Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:36.459 00:05:36.459 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:36.459 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:36.459 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:36.459 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:36.459 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:36.459 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:36.459 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:37.027 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:37.027 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:37.027 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:37.027 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:37.027 [2024-12-06 12:13:23.510324] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
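Editor's note: every spdk_dd invocation in this trace is handed the same bdev configuration over an anonymous file descriptor (--json /dev/fd/62, produced by the test's gen_conf helper): attach the PCIe controller at 0000:00:10.0 as Nvme0 and wait for bdev examination before the copy starts. Run by hand, the equivalent is roughly the sketch below; the heredoc reproduces the JSON printed in the trace, and feeding it through process substitution is only one possible way to deliver it.

./build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 \
    --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "trtype": "pcie",
            "traddr": "0000:00:10.0",
            "name": "Nvme0"
          },
          "method": "bdev_nvme_attach_controller"
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    }
  ]
}
EOF
)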
00:05:37.027 [2024-12-06 12:13:23.510402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59627 ] 00:05:37.027 { 00:05:37.027 "subsystems": [ 00:05:37.027 { 00:05:37.027 "subsystem": "bdev", 00:05:37.027 "config": [ 00:05:37.027 { 00:05:37.027 "params": { 00:05:37.027 "trtype": "pcie", 00:05:37.027 "traddr": "0000:00:10.0", 00:05:37.027 "name": "Nvme0" 00:05:37.027 }, 00:05:37.027 "method": "bdev_nvme_attach_controller" 00:05:37.027 }, 00:05:37.027 { 00:05:37.027 "method": "bdev_wait_for_examine" 00:05:37.027 } 00:05:37.027 ] 00:05:37.027 } 00:05:37.027 ] 00:05:37.027 } 00:05:37.027 [2024-12-06 12:13:23.644712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.027 [2024-12-06 12:13:23.672404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.286 [2024-12-06 12:13:23.702865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.286  [2024-12-06T12:13:23.944Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:37.286 00:05:37.286 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:37.286 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:37.286 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:37.286 12:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:37.545 [2024-12-06 12:13:23.951434] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:37.545 [2024-12-06 12:13:23.951532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59641 ] 00:05:37.545 { 00:05:37.545 "subsystems": [ 00:05:37.545 { 00:05:37.545 "subsystem": "bdev", 00:05:37.545 "config": [ 00:05:37.545 { 00:05:37.545 "params": { 00:05:37.545 "trtype": "pcie", 00:05:37.545 "traddr": "0000:00:10.0", 00:05:37.545 "name": "Nvme0" 00:05:37.545 }, 00:05:37.545 "method": "bdev_nvme_attach_controller" 00:05:37.545 }, 00:05:37.545 { 00:05:37.545 "method": "bdev_wait_for_examine" 00:05:37.545 } 00:05:37.545 ] 00:05:37.545 } 00:05:37.545 ] 00:05:37.545 } 00:05:37.545 [2024-12-06 12:13:24.087213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.545 [2024-12-06 12:13:24.114051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.545 [2024-12-06 12:13:24.140806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.805  [2024-12-06T12:13:24.463Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:37.805 00:05:37.805 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:37.805 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:37.805 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:37.805 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:37.805 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:37.805 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:37.805 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:37.805 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:37.805 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:37.805 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:37.805 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:37.805 [2024-12-06 12:13:24.393164] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:37.805 [2024-12-06 12:13:24.393300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59656 ] 00:05:37.805 { 00:05:37.805 "subsystems": [ 00:05:37.805 { 00:05:37.805 "subsystem": "bdev", 00:05:37.805 "config": [ 00:05:37.805 { 00:05:37.805 "params": { 00:05:37.805 "trtype": "pcie", 00:05:37.805 "traddr": "0000:00:10.0", 00:05:37.805 "name": "Nvme0" 00:05:37.805 }, 00:05:37.805 "method": "bdev_nvme_attach_controller" 00:05:37.805 }, 00:05:37.805 { 00:05:37.805 "method": "bdev_wait_for_examine" 00:05:37.805 } 00:05:37.805 ] 00:05:37.805 } 00:05:37.805 ] 00:05:37.805 } 00:05:38.064 [2024-12-06 12:13:24.528581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.064 [2024-12-06 12:13:24.556778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.064 [2024-12-06 12:13:24.583927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.064  [2024-12-06T12:13:24.982Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:38.324 00:05:38.324 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:38.324 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:38.324 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:38.324 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:38.324 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:38.324 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:38.324 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:38.324 12:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.594 12:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:38.594 12:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:38.594 12:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:38.594 12:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.858 { 00:05:38.858 "subsystems": [ 00:05:38.858 { 00:05:38.858 "subsystem": "bdev", 00:05:38.858 "config": [ 00:05:38.858 { 00:05:38.858 "params": { 00:05:38.858 "trtype": "pcie", 00:05:38.858 "traddr": "0000:00:10.0", 00:05:38.858 "name": "Nvme0" 00:05:38.859 }, 00:05:38.859 "method": "bdev_nvme_attach_controller" 00:05:38.859 }, 00:05:38.859 { 00:05:38.859 "method": "bdev_wait_for_examine" 00:05:38.859 } 00:05:38.859 ] 00:05:38.859 } 00:05:38.859 ] 00:05:38.859 } 00:05:38.859 [2024-12-06 12:13:25.302573] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:38.859 [2024-12-06 12:13:25.302664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59670 ] 00:05:38.859 [2024-12-06 12:13:25.444042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.859 [2024-12-06 12:13:25.471433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.859 [2024-12-06 12:13:25.498551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.116  [2024-12-06T12:13:25.774Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:39.116 00:05:39.116 12:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:39.116 12:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:39.116 12:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:39.116 12:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:39.116 { 00:05:39.117 "subsystems": [ 00:05:39.117 { 00:05:39.117 "subsystem": "bdev", 00:05:39.117 "config": [ 00:05:39.117 { 00:05:39.117 "params": { 00:05:39.117 "trtype": "pcie", 00:05:39.117 "traddr": "0000:00:10.0", 00:05:39.117 "name": "Nvme0" 00:05:39.117 }, 00:05:39.117 "method": "bdev_nvme_attach_controller" 00:05:39.117 }, 00:05:39.117 { 00:05:39.117 "method": "bdev_wait_for_examine" 00:05:39.117 } 00:05:39.117 ] 00:05:39.117 } 00:05:39.117 ] 00:05:39.117 } 00:05:39.117 [2024-12-06 12:13:25.758663] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:39.117 [2024-12-06 12:13:25.758763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59683 ] 00:05:39.376 [2024-12-06 12:13:25.901231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.376 [2024-12-06 12:13:25.928548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.376 [2024-12-06 12:13:25.956116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.636  [2024-12-06T12:13:26.294Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:39.636 00:05:39.636 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:39.636 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:39.636 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:39.636 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:39.636 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:39.636 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:39.636 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:39.637 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:39.637 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:39.637 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:39.637 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:39.637 [2024-12-06 12:13:26.220576] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:39.637 [2024-12-06 12:13:26.220666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59699 ] 00:05:39.637 { 00:05:39.637 "subsystems": [ 00:05:39.637 { 00:05:39.637 "subsystem": "bdev", 00:05:39.637 "config": [ 00:05:39.637 { 00:05:39.637 "params": { 00:05:39.637 "trtype": "pcie", 00:05:39.637 "traddr": "0000:00:10.0", 00:05:39.637 "name": "Nvme0" 00:05:39.637 }, 00:05:39.637 "method": "bdev_nvme_attach_controller" 00:05:39.637 }, 00:05:39.637 { 00:05:39.637 "method": "bdev_wait_for_examine" 00:05:39.637 } 00:05:39.637 ] 00:05:39.637 } 00:05:39.637 ] 00:05:39.637 } 00:05:39.896 [2024-12-06 12:13:26.359514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.896 [2024-12-06 12:13:26.387181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.896 [2024-12-06 12:13:26.413861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.896  [2024-12-06T12:13:26.813Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:40.155 00:05:40.155 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:40.155 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:40.155 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:40.155 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:40.156 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:40.156 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:40.156 12:13:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.725 12:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:40.725 12:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:40.725 12:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:40.725 12:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.725 [2024-12-06 12:13:27.131681] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:40.725 [2024-12-06 12:13:27.131774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:05:40.725 { 00:05:40.725 "subsystems": [ 00:05:40.725 { 00:05:40.725 "subsystem": "bdev", 00:05:40.725 "config": [ 00:05:40.725 { 00:05:40.725 "params": { 00:05:40.725 "trtype": "pcie", 00:05:40.725 "traddr": "0000:00:10.0", 00:05:40.725 "name": "Nvme0" 00:05:40.725 }, 00:05:40.725 "method": "bdev_nvme_attach_controller" 00:05:40.725 }, 00:05:40.725 { 00:05:40.725 "method": "bdev_wait_for_examine" 00:05:40.725 } 00:05:40.725 ] 00:05:40.725 } 00:05:40.725 ] 00:05:40.725 } 00:05:40.725 [2024-12-06 12:13:27.276340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.725 [2024-12-06 12:13:27.303469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.725 [2024-12-06 12:13:27.330513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.985  [2024-12-06T12:13:27.643Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:40.985 00:05:40.985 12:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:40.985 12:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:40.985 12:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:40.985 12:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 { 00:05:40.985 "subsystems": [ 00:05:40.985 { 00:05:40.985 "subsystem": "bdev", 00:05:40.985 "config": [ 00:05:40.985 { 00:05:40.985 "params": { 00:05:40.985 "trtype": "pcie", 00:05:40.985 "traddr": "0000:00:10.0", 00:05:40.985 "name": "Nvme0" 00:05:40.985 }, 00:05:40.985 "method": "bdev_nvme_attach_controller" 00:05:40.985 }, 00:05:40.985 { 00:05:40.985 "method": "bdev_wait_for_examine" 00:05:40.985 } 00:05:40.985 ] 00:05:40.985 } 00:05:40.985 ] 00:05:40.985 } 00:05:40.985 [2024-12-06 12:13:27.589308] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:40.985 [2024-12-06 12:13:27.589413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59731 ] 00:05:41.244 [2024-12-06 12:13:27.731395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.244 [2024-12-06 12:13:27.759936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.244 [2024-12-06 12:13:27.792622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.244  [2024-12-06T12:13:28.161Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:41.503 00:05:41.503 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.503 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:41.503 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:41.503 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:41.503 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:41.503 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:41.503 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:41.503 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:41.503 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:41.503 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.503 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.503 { 00:05:41.503 "subsystems": [ 00:05:41.503 { 00:05:41.503 "subsystem": "bdev", 00:05:41.503 "config": [ 00:05:41.503 { 00:05:41.503 "params": { 00:05:41.503 "trtype": "pcie", 00:05:41.503 "traddr": "0000:00:10.0", 00:05:41.503 "name": "Nvme0" 00:05:41.503 }, 00:05:41.503 "method": "bdev_nvme_attach_controller" 00:05:41.503 }, 00:05:41.503 { 00:05:41.503 "method": "bdev_wait_for_examine" 00:05:41.503 } 00:05:41.503 ] 00:05:41.503 } 00:05:41.503 ] 00:05:41.503 } 00:05:41.503 [2024-12-06 12:13:28.062739] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:41.503 [2024-12-06 12:13:28.062831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59747 ] 00:05:41.762 [2024-12-06 12:13:28.204869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.762 [2024-12-06 12:13:28.231744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.762 [2024-12-06 12:13:28.258308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.762  [2024-12-06T12:13:28.680Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:42.022 00:05:42.022 00:05:42.022 real 0m11.386s 00:05:42.022 user 0m8.476s 00:05:42.022 sys 0m3.403s 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.022 ************************************ 00:05:42.022 END TEST dd_rw 00:05:42.022 ************************************ 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.022 ************************************ 00:05:42.022 START TEST dd_rw_offset 00:05:42.022 ************************************ 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:42.022 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:42.023 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=m6if6gy9z1u99ygdgsetfhn9dfwa00ixsf1ko6r6cjtls75cjzk625dvvae7w33vwnashq0g6cdiggj9ad7uyezvwdha94qvrl6kuo43n3lut1mw05q4rok15wzzghutsiuagp07ztfcaikdl6twp3t3pnzn89mx1zbme53v8lknjas6jvxeun8pswbzeddnbnb86bb932u4jtk321jjeefdg68zux16tui23dru3qg3mc6hjur5ct143294bl4swrqn1mytotd1em9zxouzs5zn4cfkfjeeim32pm8pxljsx47fxaqoyhthi1a7117h23v4brlhmt2qrka20fkddhfm32yc1wqazset0glq2ud7hzeymembdcoy8y0bauhhtie2efgyzuwzsrn38ezztxqa1eh5aewjgrttzgf91syxnnmkfrtacfaqbjed8fdpdv27wcbalkkpxjmens5ircj2atr8vt9m1xd73ybh0gkbyemyv7835tp72vi82pzynlz05k56g7q44p5690c9s52nss0y3ogr7vf4jkb925uq8n21c8uwjn22uhc44uyqlnxsewrwpvzb36lgpk9oud57ifwop9muwbn1tsvm69f7wcdo54wgbil8q04dqqkyholdbf7bw5sgn9uzxe3zawgpnp49r9yv5fjo4v4qytl38vqcrb5nomcull2cm8mac9njxevpuqfpcvxa371g9csbq4g11eymq1za7y63485mb02rvslbqpicejz0cgfl7ewhjfixiykq9lqtkyzdi95o48g4yt41f59xtf507hpgff4agd9ijnp1fnylvhumlvmsatr9qmjxmstruykr3f5tjb7gpxgxojqm9rildldabevpddd5sbs6jb8sno6w82z01x8waw94lhz36fomef6775jlrfujunzmrecmegi35v78fqlztemkem2jixuo2n3du8kus04fkx7g9as3k7vvu4c4ydgoqm6s4h000ul0iepunwocx7pfdj91pooxxf0kekl35gw0b1cr9ano1d3q172lfxnqjf43tsjtcm5owk8j11qfafmy2esz3wrptsfnnnqaedyv1xir8dxagnkc8ksmwow7dkm3osr4k7swtbqjrkri7xvyhssr4wejdvo241hl1cuspkthhs65k74y9o9v8inalx6u0lguhwoxiwguwjtwoa7dbrb0o91iumuvs6a2m03f3wc46fe2vo3ypco9tum4qksa2y3lrwr780jyjdltrhmbk1g6s4jp8luc2j9qlf5gij0aaxvp7jfdkvdxpajxiwilsbhvls970118uot86rn2bueh0vp3dki925q6xheoshj9g2wkzbnl8fb3zilzf816m85so8dacbmddshoo92wtzjhqdome3x516ul2pmn6kqjokhyqulwntn1krelecw3czldteeplm7f20mv3u5195qst5oyw1b5s36zofk437z1gxw7o0l7mef89mdc52muqdpmp08rchfxtkwev225q9vgxpwy3e5l8903pjrgsqi0qfbwoq73xfwmeg76wwig1bm5lbqvejlcsdmqjoqfmwwwm846nn4xn7eq8xvppdi6gqf03fbar6of2s7iy9rz7wu4b2xbfr3d820xbgosjstm69xoq1svkp9cv02zavhm1i7o3xhkqwlibj7b1uea1jwvb17am5efa3l14i55tm37rcwtd4i1bqh0z4ew3ru1q2su0eq8tkac2w0w0ia75ax7vvzilezai8xqp8w93l7ecfb9keyftphcl2nxhw38chw0vsbyhipahoqy8iwoix27e3q0dtxy87us29muxl0xohg869lm0y4pqakjmji2w0w2mtdfwyo52qhym2q09c3x45de52xk085vbky3bxb312yha1w81slignwiulh0u6y410agnrgdstihxeqv1czu6nlkxa0mgo86cbsza0w5atzo5upxf8fyey9pkz76q752vrinc2z5rlzc9diezys44cau9c1vlo3it4aay9joe1fg2kiq68bwf13rwl3ccekz7mmycp262ww7o4is7h01e2dvvt7flo4gxwgk79gelbtq0kitvurzkh54p04of81u7igapchcaldk4lf6edlpgvj89p148zq8yq6kar9bij4qstl8qa5i5if4vkonjg3v7fdprzpogrzyw63lv1h3z4fpuima9k8qgi2c4xgodlgw6mnr247lpj1cnsb7t18ty0i92yg31spf44rjdyxacx9c5u1r1yxovrk9oqovvhasc0nx2mbn7xjouu2ookvb7hwd16eu5hk0x4sch3a6rkabq6ztf8ly0s7m379hhdakvosc6xxtho3jmsz2vv4klv54tkgxni9qmu154hvabf4f3szmpthp1eni3qmyw6izhqfe5aof2exg004fix1p9xpyq5bnh778eah1yfaax7wdtwhy5jk3nqv4buu6qrlrfjl3fozq4ds3z30jcg5lji7r6x1udm7ppzqk3bp0ta5wjobyaex3xcpn065i5m4kg62hmwl7pniv6xww7m8f8m90fy4p7yr9m3kcieu4kfq4nltsvqc9p6kubny6vnt66xtanko68qmaoe2fzg7upxk2zbmpphaj7o0gno25mfyzt7xlll1b27hc0n13rd9tdg8rwn5vfnx0rx7byg32da014j4o5rxykqpslulctllqo608ardbkdb6lt6z9y26eakjrivf7tl7zca6rugaw5wwt1z2n625gn3ztdqijeumh9tuv8efb4wapgrmk66xh1sirg56m7nam904if25orqn4dwb6vr62qjce8qd0gldgwczs1v6gu766oswxg6mmrxq25c6hzr5bfun1ij7r9vst1806xtj59lhymburwj41bv8ein1e25nsykprxxgmne42ys0clnpsqthq1non8iwgjok15i500riz2kn7xbuhrcov7tsswam2zpwkwuxi6d5jpqgv90ypmdfmr8xsw99m53qhhyx1xr8lh3e4tdkbcpjwiaoywhh5vzr5f6m5vt4503vw1zysu21rb2kbjhfa02hcni4w36qrnwx10czs36mbbdkhlv3tkln85pdp7goliorlgb3qaqkpo9894aa1ucqtuypgogz5xi65lojk5kyxqgw0kprf6gqlvxzgf9h32gp72zj1dhnpwchgtgeapt4f2vkrwlw8rtezni3qi9hb5cpdduqahcvu6fc42fun1po1xy3fj4wcmyat1776t5jggjckjm1zqxgjw2s0oxuww4do7svexew789gvvgqg86qaxnseefo76z8bnzvleuttq6o8flf0xjqkz1vtp64znjpuho50bealccyokn5pxo0dmv4k29ynxv5qu47kc18hfiimojs7e37vpq4ejvch4c3snzcf3tmfy1z8zh2x8h6c2sgj9jpvhoqndg4nlhuubexcjd5sc110qr4v00u2immbtlh5s2b41o81ccu3cqqvb00fdqiqtqcpwfgor4oog
2vvnlqis7gny771plp53wtxd93i5movqls6cltonqugfh5waolcru189q4qcoh2z8vwt1706327ftcfzompjlit7ugfh1zhszdw76fdo08336vy7uzvyr7vd7jzy5cwvdvunic6m3t8l64yw7wkwvxa2yr0xmrg6gkp33yq8bw4sgbj5z8dbnghj7efh0k4o16gibe1wi23qawbx5zujenfzu7rcgrfnn2ubzlqrslohca1m8nynmjtiuie9gbyh537jhdvoaw2epwj9kpie8dsndmid624xh6779tufyeb748fvxyyqhjv3gp0hlihb4m6lzeqyugj1vz8q3qx8nynsjjlsf681wspj75hstc0e2q0yyycnp7rakruc04p6dl9iqhyjtgmxhiu6mricpt0vg4zcc889jcrvfqqlrtqacxiv8kkx9msjrdqd4h44gr1256qw5l5fuorrrfaz319gpngt6cmeje0ncujpxtaq17b2goivjb4ud0z66ypzdomguy7063ega5ilzayz76pvwgsmzfeim8 00:05:42.023 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:42.023 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:42.023 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:42.023 12:13:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:42.023 [2024-12-06 12:13:28.608194] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:42.023 [2024-12-06 12:13:28.608293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59777 ] 00:05:42.023 { 00:05:42.023 "subsystems": [ 00:05:42.023 { 00:05:42.023 "subsystem": "bdev", 00:05:42.023 "config": [ 00:05:42.023 { 00:05:42.023 "params": { 00:05:42.023 "trtype": "pcie", 00:05:42.023 "traddr": "0000:00:10.0", 00:05:42.023 "name": "Nvme0" 00:05:42.023 }, 00:05:42.023 "method": "bdev_nvme_attach_controller" 00:05:42.023 }, 00:05:42.023 { 00:05:42.023 "method": "bdev_wait_for_examine" 00:05:42.023 } 00:05:42.023 ] 00:05:42.023 } 00:05:42.023 ] 00:05:42.023 } 00:05:42.283 [2024-12-06 12:13:28.744935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.283 [2024-12-06 12:13:28.772231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.283 [2024-12-06 12:13:28.799716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.283  [2024-12-06T12:13:29.201Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:42.543 00:05:42.543 12:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:42.543 12:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:42.543 12:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:42.543 12:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:42.543 { 00:05:42.543 "subsystems": [ 00:05:42.543 { 00:05:42.543 "subsystem": "bdev", 00:05:42.543 "config": [ 00:05:42.543 { 00:05:42.543 "params": { 00:05:42.543 "trtype": "pcie", 00:05:42.543 "traddr": "0000:00:10.0", 00:05:42.543 "name": "Nvme0" 00:05:42.543 }, 00:05:42.543 "method": "bdev_nvme_attach_controller" 00:05:42.543 }, 00:05:42.543 { 00:05:42.543 "method": "bdev_wait_for_examine" 00:05:42.543 } 00:05:42.543 ] 00:05:42.543 } 00:05:42.543 ] 00:05:42.543 } 00:05:42.543 [2024-12-06 12:13:29.073289] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:42.543 [2024-12-06 12:13:29.073394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59791 ] 00:05:42.802 [2024-12-06 12:13:29.215342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.802 [2024-12-06 12:13:29.243193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.802 [2024-12-06 12:13:29.271178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.802  [2024-12-06T12:13:29.720Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:43.062 00:05:43.062 12:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ m6if6gy9z1u99ygdgsetfhn9dfwa00ixsf1ko6r6cjtls75cjzk625dvvae7w33vwnashq0g6cdiggj9ad7uyezvwdha94qvrl6kuo43n3lut1mw05q4rok15wzzghutsiuagp07ztfcaikdl6twp3t3pnzn89mx1zbme53v8lknjas6jvxeun8pswbzeddnbnb86bb932u4jtk321jjeefdg68zux16tui23dru3qg3mc6hjur5ct143294bl4swrqn1mytotd1em9zxouzs5zn4cfkfjeeim32pm8pxljsx47fxaqoyhthi1a7117h23v4brlhmt2qrka20fkddhfm32yc1wqazset0glq2ud7hzeymembdcoy8y0bauhhtie2efgyzuwzsrn38ezztxqa1eh5aewjgrttzgf91syxnnmkfrtacfaqbjed8fdpdv27wcbalkkpxjmens5ircj2atr8vt9m1xd73ybh0gkbyemyv7835tp72vi82pzynlz05k56g7q44p5690c9s52nss0y3ogr7vf4jkb925uq8n21c8uwjn22uhc44uyqlnxsewrwpvzb36lgpk9oud57ifwop9muwbn1tsvm69f7wcdo54wgbil8q04dqqkyholdbf7bw5sgn9uzxe3zawgpnp49r9yv5fjo4v4qytl38vqcrb5nomcull2cm8mac9njxevpuqfpcvxa371g9csbq4g11eymq1za7y63485mb02rvslbqpicejz0cgfl7ewhjfixiykq9lqtkyzdi95o48g4yt41f59xtf507hpgff4agd9ijnp1fnylvhumlvmsatr9qmjxmstruykr3f5tjb7gpxgxojqm9rildldabevpddd5sbs6jb8sno6w82z01x8waw94lhz36fomef6775jlrfujunzmrecmegi35v78fqlztemkem2jixuo2n3du8kus04fkx7g9as3k7vvu4c4ydgoqm6s4h000ul0iepunwocx7pfdj91pooxxf0kekl35gw0b1cr9ano1d3q172lfxnqjf43tsjtcm5owk8j11qfafmy2esz3wrptsfnnnqaedyv1xir8dxagnkc8ksmwow7dkm3osr4k7swtbqjrkri7xvyhssr4wejdvo241hl1cuspkthhs65k74y9o9v8inalx6u0lguhwoxiwguwjtwoa7dbrb0o91iumuvs6a2m03f3wc46fe2vo3ypco9tum4qksa2y3lrwr780jyjdltrhmbk1g6s4jp8luc2j9qlf5gij0aaxvp7jfdkvdxpajxiwilsbhvls970118uot86rn2bueh0vp3dki925q6xheoshj9g2wkzbnl8fb3zilzf816m85so8dacbmddshoo92wtzjhqdome3x516ul2pmn6kqjokhyqulwntn1krelecw3czldteeplm7f20mv3u5195qst5oyw1b5s36zofk437z1gxw7o0l7mef89mdc52muqdpmp08rchfxtkwev225q9vgxpwy3e5l8903pjrgsqi0qfbwoq73xfwmeg76wwig1bm5lbqvejlcsdmqjoqfmwwwm846nn4xn7eq8xvppdi6gqf03fbar6of2s7iy9rz7wu4b2xbfr3d820xbgosjstm69xoq1svkp9cv02zavhm1i7o3xhkqwlibj7b1uea1jwvb17am5efa3l14i55tm37rcwtd4i1bqh0z4ew3ru1q2su0eq8tkac2w0w0ia75ax7vvzilezai8xqp8w93l7ecfb9keyftphcl2nxhw38chw0vsbyhipahoqy8iwoix27e3q0dtxy87us29muxl0xohg869lm0y4pqakjmji2w0w2mtdfwyo52qhym2q09c3x45de52xk085vbky3bxb312yha1w81slignwiulh0u6y410agnrgdstihxeqv1czu6nlkxa0mgo86cbsza0w5atzo5upxf8fyey9pkz76q752vrinc2z5rlzc9diezys44cau9c1vlo3it4aay9joe1fg2kiq68bwf13rwl3ccekz7mmycp262ww7o4is7h01e2dvvt7flo4gxwgk79gelbtq0kitvurzkh54p04of81u7igapchcaldk4lf6edlpgvj89p148zq8yq6kar9bij4qstl8qa5i5if4vkonjg3v7fdprzpogrzyw63lv1h3z4fpuima9k8qgi2c4xgodlgw6mnr247lpj1cnsb7t18ty0i92yg31spf44rjdyxacx9c5u1r1yxovrk9oqovvhasc0nx2mbn7xjouu2ookvb7hwd16eu5hk0x4sch3a6rkabq6ztf8ly0s7m379hhdakvosc6xxtho3jmsz2vv4klv54tkgxni9qmu154hvabf4f3szmpthp1eni3qmyw6izhqfe5aof2exg004fix1p9xpyq5bnh778eah1yfaax7wdtwhy5jk3nqv4buu6qrlrfjl3fozq4ds3z30jcg5lji7r6x1udm7ppzqk3bp0ta5wjobyaex3xcpn065i5m4kg62hmwl7pniv6xww7m8f8m90fy4p7yr9m3kcieu4kfq4nlts
vqc9p6kubny6vnt66xtanko68qmaoe2fzg7upxk2zbmpphaj7o0gno25mfyzt7xlll1b27hc0n13rd9tdg8rwn5vfnx0rx7byg32da014j4o5rxykqpslulctllqo608ardbkdb6lt6z9y26eakjrivf7tl7zca6rugaw5wwt1z2n625gn3ztdqijeumh9tuv8efb4wapgrmk66xh1sirg56m7nam904if25orqn4dwb6vr62qjce8qd0gldgwczs1v6gu766oswxg6mmrxq25c6hzr5bfun1ij7r9vst1806xtj59lhymburwj41bv8ein1e25nsykprxxgmne42ys0clnpsqthq1non8iwgjok15i500riz2kn7xbuhrcov7tsswam2zpwkwuxi6d5jpqgv90ypmdfmr8xsw99m53qhhyx1xr8lh3e4tdkbcpjwiaoywhh5vzr5f6m5vt4503vw1zysu21rb2kbjhfa02hcni4w36qrnwx10czs36mbbdkhlv3tkln85pdp7goliorlgb3qaqkpo9894aa1ucqtuypgogz5xi65lojk5kyxqgw0kprf6gqlvxzgf9h32gp72zj1dhnpwchgtgeapt4f2vkrwlw8rtezni3qi9hb5cpdduqahcvu6fc42fun1po1xy3fj4wcmyat1776t5jggjckjm1zqxgjw2s0oxuww4do7svexew789gvvgqg86qaxnseefo76z8bnzvleuttq6o8flf0xjqkz1vtp64znjpuho50bealccyokn5pxo0dmv4k29ynxv5qu47kc18hfiimojs7e37vpq4ejvch4c3snzcf3tmfy1z8zh2x8h6c2sgj9jpvhoqndg4nlhuubexcjd5sc110qr4v00u2immbtlh5s2b41o81ccu3cqqvb00fdqiqtqcpwfgor4oog2vvnlqis7gny771plp53wtxd93i5movqls6cltonqugfh5waolcru189q4qcoh2z8vwt1706327ftcfzompjlit7ugfh1zhszdw76fdo08336vy7uzvyr7vd7jzy5cwvdvunic6m3t8l64yw7wkwvxa2yr0xmrg6gkp33yq8bw4sgbj5z8dbnghj7efh0k4o16gibe1wi23qawbx5zujenfzu7rcgrfnn2ubzlqrslohca1m8nynmjtiuie9gbyh537jhdvoaw2epwj9kpie8dsndmid624xh6779tufyeb748fvxyyqhjv3gp0hlihb4m6lzeqyugj1vz8q3qx8nynsjjlsf681wspj75hstc0e2q0yyycnp7rakruc04p6dl9iqhyjtgmxhiu6mricpt0vg4zcc889jcrvfqqlrtqacxiv8kkx9msjrdqd4h44gr1256qw5l5fuorrrfaz319gpngt6cmeje0ncujpxtaq17b2goivjb4ud0z66ypzdomguy7063ega5ilzayz76pvwgsmzfeim8 == \m\6\i\f\6\g\y\9\z\1\u\9\9\y\g\d\g\s\e\t\f\h\n\9\d\f\w\a\0\0\i\x\s\f\1\k\o\6\r\6\c\j\t\l\s\7\5\c\j\z\k\6\2\5\d\v\v\a\e\7\w\3\3\v\w\n\a\s\h\q\0\g\6\c\d\i\g\g\j\9\a\d\7\u\y\e\z\v\w\d\h\a\9\4\q\v\r\l\6\k\u\o\4\3\n\3\l\u\t\1\m\w\0\5\q\4\r\o\k\1\5\w\z\z\g\h\u\t\s\i\u\a\g\p\0\7\z\t\f\c\a\i\k\d\l\6\t\w\p\3\t\3\p\n\z\n\8\9\m\x\1\z\b\m\e\5\3\v\8\l\k\n\j\a\s\6\j\v\x\e\u\n\8\p\s\w\b\z\e\d\d\n\b\n\b\8\6\b\b\9\3\2\u\4\j\t\k\3\2\1\j\j\e\e\f\d\g\6\8\z\u\x\1\6\t\u\i\2\3\d\r\u\3\q\g\3\m\c\6\h\j\u\r\5\c\t\1\4\3\2\9\4\b\l\4\s\w\r\q\n\1\m\y\t\o\t\d\1\e\m\9\z\x\o\u\z\s\5\z\n\4\c\f\k\f\j\e\e\i\m\3\2\p\m\8\p\x\l\j\s\x\4\7\f\x\a\q\o\y\h\t\h\i\1\a\7\1\1\7\h\2\3\v\4\b\r\l\h\m\t\2\q\r\k\a\2\0\f\k\d\d\h\f\m\3\2\y\c\1\w\q\a\z\s\e\t\0\g\l\q\2\u\d\7\h\z\e\y\m\e\m\b\d\c\o\y\8\y\0\b\a\u\h\h\t\i\e\2\e\f\g\y\z\u\w\z\s\r\n\3\8\e\z\z\t\x\q\a\1\e\h\5\a\e\w\j\g\r\t\t\z\g\f\9\1\s\y\x\n\n\m\k\f\r\t\a\c\f\a\q\b\j\e\d\8\f\d\p\d\v\2\7\w\c\b\a\l\k\k\p\x\j\m\e\n\s\5\i\r\c\j\2\a\t\r\8\v\t\9\m\1\x\d\7\3\y\b\h\0\g\k\b\y\e\m\y\v\7\8\3\5\t\p\7\2\v\i\8\2\p\z\y\n\l\z\0\5\k\5\6\g\7\q\4\4\p\5\6\9\0\c\9\s\5\2\n\s\s\0\y\3\o\g\r\7\v\f\4\j\k\b\9\2\5\u\q\8\n\2\1\c\8\u\w\j\n\2\2\u\h\c\4\4\u\y\q\l\n\x\s\e\w\r\w\p\v\z\b\3\6\l\g\p\k\9\o\u\d\5\7\i\f\w\o\p\9\m\u\w\b\n\1\t\s\v\m\6\9\f\7\w\c\d\o\5\4\w\g\b\i\l\8\q\0\4\d\q\q\k\y\h\o\l\d\b\f\7\b\w\5\s\g\n\9\u\z\x\e\3\z\a\w\g\p\n\p\4\9\r\9\y\v\5\f\j\o\4\v\4\q\y\t\l\3\8\v\q\c\r\b\5\n\o\m\c\u\l\l\2\c\m\8\m\a\c\9\n\j\x\e\v\p\u\q\f\p\c\v\x\a\3\7\1\g\9\c\s\b\q\4\g\1\1\e\y\m\q\1\z\a\7\y\6\3\4\8\5\m\b\0\2\r\v\s\l\b\q\p\i\c\e\j\z\0\c\g\f\l\7\e\w\h\j\f\i\x\i\y\k\q\9\l\q\t\k\y\z\d\i\9\5\o\4\8\g\4\y\t\4\1\f\5\9\x\t\f\5\0\7\h\p\g\f\f\4\a\g\d\9\i\j\n\p\1\f\n\y\l\v\h\u\m\l\v\m\s\a\t\r\9\q\m\j\x\m\s\t\r\u\y\k\r\3\f\5\t\j\b\7\g\p\x\g\x\o\j\q\m\9\r\i\l\d\l\d\a\b\e\v\p\d\d\d\5\s\b\s\6\j\b\8\s\n\o\6\w\8\2\z\0\1\x\8\w\a\w\9\4\l\h\z\3\6\f\o\m\e\f\6\7\7\5\j\l\r\f\u\j\u\n\z\m\r\e\c\m\e\g\i\3\5\v\7\8\f\q\l\z\t\e\m\k\e\m\2\j\i\x\u\o\2\n\3\d\u\8\k\u\s\0\4\f\k\x\7\g\9\a\s\3\k\7\v\v\u\4\c\4\y\d\g\o\q\m\6\s\4\h\0\0\0\u\l\0\i\e\p\u\n\w\o\c\x\7\p\f\d\j\9\1\p\o\o\x\x\f\0\k\e\k\l\
3\5\g\w\0\b\1\c\r\9\a\n\o\1\d\3\q\1\7\2\l\f\x\n\q\j\f\4\3\t\s\j\t\c\m\5\o\w\k\8\j\1\1\q\f\a\f\m\y\2\e\s\z\3\w\r\p\t\s\f\n\n\n\q\a\e\d\y\v\1\x\i\r\8\d\x\a\g\n\k\c\8\k\s\m\w\o\w\7\d\k\m\3\o\s\r\4\k\7\s\w\t\b\q\j\r\k\r\i\7\x\v\y\h\s\s\r\4\w\e\j\d\v\o\2\4\1\h\l\1\c\u\s\p\k\t\h\h\s\6\5\k\7\4\y\9\o\9\v\8\i\n\a\l\x\6\u\0\l\g\u\h\w\o\x\i\w\g\u\w\j\t\w\o\a\7\d\b\r\b\0\o\9\1\i\u\m\u\v\s\6\a\2\m\0\3\f\3\w\c\4\6\f\e\2\v\o\3\y\p\c\o\9\t\u\m\4\q\k\s\a\2\y\3\l\r\w\r\7\8\0\j\y\j\d\l\t\r\h\m\b\k\1\g\6\s\4\j\p\8\l\u\c\2\j\9\q\l\f\5\g\i\j\0\a\a\x\v\p\7\j\f\d\k\v\d\x\p\a\j\x\i\w\i\l\s\b\h\v\l\s\9\7\0\1\1\8\u\o\t\8\6\r\n\2\b\u\e\h\0\v\p\3\d\k\i\9\2\5\q\6\x\h\e\o\s\h\j\9\g\2\w\k\z\b\n\l\8\f\b\3\z\i\l\z\f\8\1\6\m\8\5\s\o\8\d\a\c\b\m\d\d\s\h\o\o\9\2\w\t\z\j\h\q\d\o\m\e\3\x\5\1\6\u\l\2\p\m\n\6\k\q\j\o\k\h\y\q\u\l\w\n\t\n\1\k\r\e\l\e\c\w\3\c\z\l\d\t\e\e\p\l\m\7\f\2\0\m\v\3\u\5\1\9\5\q\s\t\5\o\y\w\1\b\5\s\3\6\z\o\f\k\4\3\7\z\1\g\x\w\7\o\0\l\7\m\e\f\8\9\m\d\c\5\2\m\u\q\d\p\m\p\0\8\r\c\h\f\x\t\k\w\e\v\2\2\5\q\9\v\g\x\p\w\y\3\e\5\l\8\9\0\3\p\j\r\g\s\q\i\0\q\f\b\w\o\q\7\3\x\f\w\m\e\g\7\6\w\w\i\g\1\b\m\5\l\b\q\v\e\j\l\c\s\d\m\q\j\o\q\f\m\w\w\w\m\8\4\6\n\n\4\x\n\7\e\q\8\x\v\p\p\d\i\6\g\q\f\0\3\f\b\a\r\6\o\f\2\s\7\i\y\9\r\z\7\w\u\4\b\2\x\b\f\r\3\d\8\2\0\x\b\g\o\s\j\s\t\m\6\9\x\o\q\1\s\v\k\p\9\c\v\0\2\z\a\v\h\m\1\i\7\o\3\x\h\k\q\w\l\i\b\j\7\b\1\u\e\a\1\j\w\v\b\1\7\a\m\5\e\f\a\3\l\1\4\i\5\5\t\m\3\7\r\c\w\t\d\4\i\1\b\q\h\0\z\4\e\w\3\r\u\1\q\2\s\u\0\e\q\8\t\k\a\c\2\w\0\w\0\i\a\7\5\a\x\7\v\v\z\i\l\e\z\a\i\8\x\q\p\8\w\9\3\l\7\e\c\f\b\9\k\e\y\f\t\p\h\c\l\2\n\x\h\w\3\8\c\h\w\0\v\s\b\y\h\i\p\a\h\o\q\y\8\i\w\o\i\x\2\7\e\3\q\0\d\t\x\y\8\7\u\s\2\9\m\u\x\l\0\x\o\h\g\8\6\9\l\m\0\y\4\p\q\a\k\j\m\j\i\2\w\0\w\2\m\t\d\f\w\y\o\5\2\q\h\y\m\2\q\0\9\c\3\x\4\5\d\e\5\2\x\k\0\8\5\v\b\k\y\3\b\x\b\3\1\2\y\h\a\1\w\8\1\s\l\i\g\n\w\i\u\l\h\0\u\6\y\4\1\0\a\g\n\r\g\d\s\t\i\h\x\e\q\v\1\c\z\u\6\n\l\k\x\a\0\m\g\o\8\6\c\b\s\z\a\0\w\5\a\t\z\o\5\u\p\x\f\8\f\y\e\y\9\p\k\z\7\6\q\7\5\2\v\r\i\n\c\2\z\5\r\l\z\c\9\d\i\e\z\y\s\4\4\c\a\u\9\c\1\v\l\o\3\i\t\4\a\a\y\9\j\o\e\1\f\g\2\k\i\q\6\8\b\w\f\1\3\r\w\l\3\c\c\e\k\z\7\m\m\y\c\p\2\6\2\w\w\7\o\4\i\s\7\h\0\1\e\2\d\v\v\t\7\f\l\o\4\g\x\w\g\k\7\9\g\e\l\b\t\q\0\k\i\t\v\u\r\z\k\h\5\4\p\0\4\o\f\8\1\u\7\i\g\a\p\c\h\c\a\l\d\k\4\l\f\6\e\d\l\p\g\v\j\8\9\p\1\4\8\z\q\8\y\q\6\k\a\r\9\b\i\j\4\q\s\t\l\8\q\a\5\i\5\i\f\4\v\k\o\n\j\g\3\v\7\f\d\p\r\z\p\o\g\r\z\y\w\6\3\l\v\1\h\3\z\4\f\p\u\i\m\a\9\k\8\q\g\i\2\c\4\x\g\o\d\l\g\w\6\m\n\r\2\4\7\l\p\j\1\c\n\s\b\7\t\1\8\t\y\0\i\9\2\y\g\3\1\s\p\f\4\4\r\j\d\y\x\a\c\x\9\c\5\u\1\r\1\y\x\o\v\r\k\9\o\q\o\v\v\h\a\s\c\0\n\x\2\m\b\n\7\x\j\o\u\u\2\o\o\k\v\b\7\h\w\d\1\6\e\u\5\h\k\0\x\4\s\c\h\3\a\6\r\k\a\b\q\6\z\t\f\8\l\y\0\s\7\m\3\7\9\h\h\d\a\k\v\o\s\c\6\x\x\t\h\o\3\j\m\s\z\2\v\v\4\k\l\v\5\4\t\k\g\x\n\i\9\q\m\u\1\5\4\h\v\a\b\f\4\f\3\s\z\m\p\t\h\p\1\e\n\i\3\q\m\y\w\6\i\z\h\q\f\e\5\a\o\f\2\e\x\g\0\0\4\f\i\x\1\p\9\x\p\y\q\5\b\n\h\7\7\8\e\a\h\1\y\f\a\a\x\7\w\d\t\w\h\y\5\j\k\3\n\q\v\4\b\u\u\6\q\r\l\r\f\j\l\3\f\o\z\q\4\d\s\3\z\3\0\j\c\g\5\l\j\i\7\r\6\x\1\u\d\m\7\p\p\z\q\k\3\b\p\0\t\a\5\w\j\o\b\y\a\e\x\3\x\c\p\n\0\6\5\i\5\m\4\k\g\6\2\h\m\w\l\7\p\n\i\v\6\x\w\w\7\m\8\f\8\m\9\0\f\y\4\p\7\y\r\9\m\3\k\c\i\e\u\4\k\f\q\4\n\l\t\s\v\q\c\9\p\6\k\u\b\n\y\6\v\n\t\6\6\x\t\a\n\k\o\6\8\q\m\a\o\e\2\f\z\g\7\u\p\x\k\2\z\b\m\p\p\h\a\j\7\o\0\g\n\o\2\5\m\f\y\z\t\7\x\l\l\l\1\b\2\7\h\c\0\n\1\3\r\d\9\t\d\g\8\r\w\n\5\v\f\n\x\0\r\x\7\b\y\g\3\2\d\a\0\1\4\j\4\o\5\r\x\y\k\q\p\s\l\u\l\c\t\l\l\q\o\6\0\8\a\r\d\b\k\d\b\6\l\t\6\z\9\y\2\6\e\a\k\j\r\i\v\f\7\t\l\7\z\c\a\6\r\u\g\a\w\5\w\w\t\1\z\2\n\6\2\5\g\n\3\z\t\d\q\i\j\e\u\m\h\9\t\u\v\8\e\f\b\4\w\a\p
\g\r\m\k\6\6\x\h\1\s\i\r\g\5\6\m\7\n\a\m\9\0\4\i\f\2\5\o\r\q\n\4\d\w\b\6\v\r\6\2\q\j\c\e\8\q\d\0\g\l\d\g\w\c\z\s\1\v\6\g\u\7\6\6\o\s\w\x\g\6\m\m\r\x\q\2\5\c\6\h\z\r\5\b\f\u\n\1\i\j\7\r\9\v\s\t\1\8\0\6\x\t\j\5\9\l\h\y\m\b\u\r\w\j\4\1\b\v\8\e\i\n\1\e\2\5\n\s\y\k\p\r\x\x\g\m\n\e\4\2\y\s\0\c\l\n\p\s\q\t\h\q\1\n\o\n\8\i\w\g\j\o\k\1\5\i\5\0\0\r\i\z\2\k\n\7\x\b\u\h\r\c\o\v\7\t\s\s\w\a\m\2\z\p\w\k\w\u\x\i\6\d\5\j\p\q\g\v\9\0\y\p\m\d\f\m\r\8\x\s\w\9\9\m\5\3\q\h\h\y\x\1\x\r\8\l\h\3\e\4\t\d\k\b\c\p\j\w\i\a\o\y\w\h\h\5\v\z\r\5\f\6\m\5\v\t\4\5\0\3\v\w\1\z\y\s\u\2\1\r\b\2\k\b\j\h\f\a\0\2\h\c\n\i\4\w\3\6\q\r\n\w\x\1\0\c\z\s\3\6\m\b\b\d\k\h\l\v\3\t\k\l\n\8\5\p\d\p\7\g\o\l\i\o\r\l\g\b\3\q\a\q\k\p\o\9\8\9\4\a\a\1\u\c\q\t\u\y\p\g\o\g\z\5\x\i\6\5\l\o\j\k\5\k\y\x\q\g\w\0\k\p\r\f\6\g\q\l\v\x\z\g\f\9\h\3\2\g\p\7\2\z\j\1\d\h\n\p\w\c\h\g\t\g\e\a\p\t\4\f\2\v\k\r\w\l\w\8\r\t\e\z\n\i\3\q\i\9\h\b\5\c\p\d\d\u\q\a\h\c\v\u\6\f\c\4\2\f\u\n\1\p\o\1\x\y\3\f\j\4\w\c\m\y\a\t\1\7\7\6\t\5\j\g\g\j\c\k\j\m\1\z\q\x\g\j\w\2\s\0\o\x\u\w\w\4\d\o\7\s\v\e\x\e\w\7\8\9\g\v\v\g\q\g\8\6\q\a\x\n\s\e\e\f\o\7\6\z\8\b\n\z\v\l\e\u\t\t\q\6\o\8\f\l\f\0\x\j\q\k\z\1\v\t\p\6\4\z\n\j\p\u\h\o\5\0\b\e\a\l\c\c\y\o\k\n\5\p\x\o\0\d\m\v\4\k\2\9\y\n\x\v\5\q\u\4\7\k\c\1\8\h\f\i\i\m\o\j\s\7\e\3\7\v\p\q\4\e\j\v\c\h\4\c\3\s\n\z\c\f\3\t\m\f\y\1\z\8\z\h\2\x\8\h\6\c\2\s\g\j\9\j\p\v\h\o\q\n\d\g\4\n\l\h\u\u\b\e\x\c\j\d\5\s\c\1\1\0\q\r\4\v\0\0\u\2\i\m\m\b\t\l\h\5\s\2\b\4\1\o\8\1\c\c\u\3\c\q\q\v\b\0\0\f\d\q\i\q\t\q\c\p\w\f\g\o\r\4\o\o\g\2\v\v\n\l\q\i\s\7\g\n\y\7\7\1\p\l\p\5\3\w\t\x\d\9\3\i\5\m\o\v\q\l\s\6\c\l\t\o\n\q\u\g\f\h\5\w\a\o\l\c\r\u\1\8\9\q\4\q\c\o\h\2\z\8\v\w\t\1\7\0\6\3\2\7\f\t\c\f\z\o\m\p\j\l\i\t\7\u\g\f\h\1\z\h\s\z\d\w\7\6\f\d\o\0\8\3\3\6\v\y\7\u\z\v\y\r\7\v\d\7\j\z\y\5\c\w\v\d\v\u\n\i\c\6\m\3\t\8\l\6\4\y\w\7\w\k\w\v\x\a\2\y\r\0\x\m\r\g\6\g\k\p\3\3\y\q\8\b\w\4\s\g\b\j\5\z\8\d\b\n\g\h\j\7\e\f\h\0\k\4\o\1\6\g\i\b\e\1\w\i\2\3\q\a\w\b\x\5\z\u\j\e\n\f\z\u\7\r\c\g\r\f\n\n\2\u\b\z\l\q\r\s\l\o\h\c\a\1\m\8\n\y\n\m\j\t\i\u\i\e\9\g\b\y\h\5\3\7\j\h\d\v\o\a\w\2\e\p\w\j\9\k\p\i\e\8\d\s\n\d\m\i\d\6\2\4\x\h\6\7\7\9\t\u\f\y\e\b\7\4\8\f\v\x\y\y\q\h\j\v\3\g\p\0\h\l\i\h\b\4\m\6\l\z\e\q\y\u\g\j\1\v\z\8\q\3\q\x\8\n\y\n\s\j\j\l\s\f\6\8\1\w\s\p\j\7\5\h\s\t\c\0\e\2\q\0\y\y\y\c\n\p\7\r\a\k\r\u\c\0\4\p\6\d\l\9\i\q\h\y\j\t\g\m\x\h\i\u\6\m\r\i\c\p\t\0\v\g\4\z\c\c\8\8\9\j\c\r\v\f\q\q\l\r\t\q\a\c\x\i\v\8\k\k\x\9\m\s\j\r\d\q\d\4\h\4\4\g\r\1\2\5\6\q\w\5\l\5\f\u\o\r\r\r\f\a\z\3\1\9\g\p\n\g\t\6\c\m\e\j\e\0\n\c\u\j\p\x\t\a\q\1\7\b\2\g\o\i\v\j\b\4\u\d\0\z\6\6\y\p\z\d\o\m\g\u\y\7\0\6\3\e\g\a\5\i\l\z\a\y\z\7\6\p\v\w\g\s\m\z\f\e\i\m\8 ]] 00:05:43.063 00:05:43.063 real 0m0.975s 00:05:43.063 user 0m0.676s 00:05:43.063 sys 0m0.345s 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:43.063 ************************************ 00:05:43.063 END TEST dd_rw_offset 00:05:43.063 ************************************ 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.063 12:13:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.063 { 00:05:43.063 "subsystems": [ 00:05:43.063 { 00:05:43.063 "subsystem": "bdev", 00:05:43.063 "config": [ 00:05:43.063 { 00:05:43.063 "params": { 00:05:43.063 "trtype": "pcie", 00:05:43.063 "traddr": "0000:00:10.0", 00:05:43.063 "name": "Nvme0" 00:05:43.063 }, 00:05:43.063 "method": "bdev_nvme_attach_controller" 00:05:43.063 }, 00:05:43.063 { 00:05:43.063 "method": "bdev_wait_for_examine" 00:05:43.063 } 00:05:43.063 ] 00:05:43.063 } 00:05:43.063 ] 00:05:43.063 } 00:05:43.063 [2024-12-06 12:13:29.589866] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:43.063 [2024-12-06 12:13:29.589958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59820 ] 00:05:43.323 [2024-12-06 12:13:29.733406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.323 [2024-12-06 12:13:29.762262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.323 [2024-12-06 12:13:29.791621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.323  [2024-12-06T12:13:30.240Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:43.583 00:05:43.583 12:13:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.583 ************************************ 00:05:43.583 END TEST spdk_dd_basic_rw 00:05:43.583 ************************************ 00:05:43.583 00:05:43.583 real 0m13.846s 00:05:43.583 user 0m10.026s 00:05:43.583 sys 0m4.231s 00:05:43.583 12:13:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.583 12:13:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.583 12:13:30 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:43.583 12:13:30 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.583 12:13:30 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.583 12:13:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:43.583 ************************************ 00:05:43.583 START TEST spdk_dd_posix 00:05:43.583 ************************************ 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:43.583 * Looking for test storage... 
00:05:43.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.583 --rc genhtml_branch_coverage=1 00:05:43.583 --rc genhtml_function_coverage=1 00:05:43.583 --rc genhtml_legend=1 00:05:43.583 --rc geninfo_all_blocks=1 00:05:43.583 --rc geninfo_unexecuted_blocks=1 00:05:43.583 00:05:43.583 ' 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.583 --rc genhtml_branch_coverage=1 00:05:43.583 --rc genhtml_function_coverage=1 00:05:43.583 --rc genhtml_legend=1 00:05:43.583 --rc geninfo_all_blocks=1 00:05:43.583 --rc geninfo_unexecuted_blocks=1 00:05:43.583 00:05:43.583 ' 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.583 --rc genhtml_branch_coverage=1 00:05:43.583 --rc genhtml_function_coverage=1 00:05:43.583 --rc genhtml_legend=1 00:05:43.583 --rc geninfo_all_blocks=1 00:05:43.583 --rc geninfo_unexecuted_blocks=1 00:05:43.583 00:05:43.583 ' 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.583 --rc genhtml_branch_coverage=1 00:05:43.583 --rc genhtml_function_coverage=1 00:05:43.583 --rc genhtml_legend=1 00:05:43.583 --rc geninfo_all_blocks=1 00:05:43.583 --rc geninfo_unexecuted_blocks=1 00:05:43.583 00:05:43.583 ' 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:43.583 * First test run, liburing in use 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:43.583 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:43.844 ************************************ 00:05:43.844 START TEST dd_flag_append 00:05:43.844 ************************************ 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=hwik4bwyumwadknf9fou705o5x3kz4n4 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=qakflsgvewkh5ympbtywovr3mv8nigjt 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s hwik4bwyumwadknf9fou705o5x3kz4n4 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s qakflsgvewkh5ympbtywovr3mv8nigjt 00:05:43.844 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:43.844 [2024-12-06 12:13:30.299055] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:43.844 [2024-12-06 12:13:30.299154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59887 ] 00:05:43.844 [2024-12-06 12:13:30.429082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.844 [2024-12-06 12:13:30.456709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.844 [2024-12-06 12:13:30.482869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.157  [2024-12-06T12:13:30.815Z] Copying: 32/32 [B] (average 31 kBps) 00:05:44.157 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ qakflsgvewkh5ympbtywovr3mv8nigjthwik4bwyumwadknf9fou705o5x3kz4n4 == \q\a\k\f\l\s\g\v\e\w\k\h\5\y\m\p\b\t\y\w\o\v\r\3\m\v\8\n\i\g\j\t\h\w\i\k\4\b\w\y\u\m\w\a\d\k\n\f\9\f\o\u\7\0\5\o\5\x\3\k\z\4\n\4 ]] 00:05:44.157 00:05:44.157 real 0m0.371s 00:05:44.157 user 0m0.175s 00:05:44.157 sys 0m0.168s 00:05:44.157 ************************************ 00:05:44.157 END TEST dd_flag_append 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:44.157 ************************************ 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:44.157 ************************************ 00:05:44.157 START TEST dd_flag_directory 00:05:44.157 ************************************ 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:44.157 12:13:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:44.157 [2024-12-06 12:13:30.723191] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:44.157 [2024-12-06 12:13:30.723275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59915 ] 00:05:44.426 [2024-12-06 12:13:30.871799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.426 [2024-12-06 12:13:30.903863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.426 [2024-12-06 12:13:30.934939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.427 [2024-12-06 12:13:30.953862] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:44.427 [2024-12-06 12:13:30.953916] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:44.427 [2024-12-06 12:13:30.953945] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.427 [2024-12-06 12:13:31.013386] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.427 12:13:31 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:44.427 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:44.696 [2024-12-06 12:13:31.107984] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:44.696 [2024-12-06 12:13:31.108196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59925 ] 00:05:44.696 [2024-12-06 12:13:31.244487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.696 [2024-12-06 12:13:31.271651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.696 [2024-12-06 12:13:31.298650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.696 [2024-12-06 12:13:31.317930] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:44.696 [2024-12-06 12:13:31.317980] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:44.696 [2024-12-06 12:13:31.317994] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.956 [2024-12-06 12:13:31.376384] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:44.956 ************************************ 00:05:44.956 END TEST dd_flag_directory 00:05:44.956 ************************************ 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.956 00:05:44.956 real 0m0.773s 00:05:44.956 user 0m0.378s 00:05:44.956 sys 0m0.185s 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:05:44.956 12:13:31 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:44.956 ************************************ 00:05:44.956 START TEST dd_flag_nofollow 00:05:44.956 ************************************ 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:44.956 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:44.956 [2024-12-06 12:13:31.548476] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:44.956 [2024-12-06 12:13:31.548742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59948 ] 00:05:45.216 [2024-12-06 12:13:31.692908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.216 [2024-12-06 12:13:31.723444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.216 [2024-12-06 12:13:31.754282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.216 [2024-12-06 12:13:31.772946] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:45.216 [2024-12-06 12:13:31.772999] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:45.216 [2024-12-06 12:13:31.773013] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:45.216 [2024-12-06 12:13:31.831518] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:45.476 12:13:31 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:45.476 12:13:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:45.476 [2024-12-06 12:13:31.939260] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:45.476 [2024-12-06 12:13:31.939350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59964 ] 00:05:45.476 [2024-12-06 12:13:32.084551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.476 [2024-12-06 12:13:32.114973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.736 [2024-12-06 12:13:32.147055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.736 [2024-12-06 12:13:32.165783] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:45.736 [2024-12-06 12:13:32.165831] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:45.736 [2024-12-06 12:13:32.165844] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:45.736 [2024-12-06 12:13:32.224095] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:45.736 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:45.736 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.736 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:45.736 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:45.736 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:45.736 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.736 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:05:45.736 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:05:45.736 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:45.736 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:45.736 [2024-12-06 12:13:32.339324] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:45.736 [2024-12-06 12:13:32.339589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59966 ] 00:05:45.994 [2024-12-06 12:13:32.479527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.994 [2024-12-06 12:13:32.507912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.994 [2024-12-06 12:13:32.536535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.994  [2024-12-06T12:13:32.912Z] Copying: 512/512 [B] (average 500 kBps) 00:05:46.254 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ sgoc6sx65t6frqsfp5qr21ioy1fb6q1cxlaqdk0vn13md70sh6uyc27ble058pu9f0h12f0tr8a3g2zk0xjsykquxf7ty1o44mwm2di9kyq7k32evvfmeauca6kbny6d8fraakr42n57qwgyrcl8v3kfgejn6s5phist25qh5ttfwhzomcjz7z2onkx74v1es8roph2q4w6j7ltxxhgagrv2jp7d2ua7a0jybrd6kdrybkdpjy7uibleth32dfj3a9ktthivfjag92rq1jgox2sdn3iohems0x1ly4pxvxkpblxrw123dyxoxkfuk8mibl9l5lgqu1ap9yskzd6055yrlu2kqwzjn4581q9rdzdg88omnbx7aug8d47miyj87yw23vbjwh1t86wtntlxvzmqk0z5ctvjzv4ufcgatx79hydtvdj3lkr6xmeia5al4doeegwy3uusnn3bed76x3jbchtwg3a5mj8p0crfbq3qm6igdzkf2sy5s7azokw9 == \s\g\o\c\6\s\x\6\5\t\6\f\r\q\s\f\p\5\q\r\2\1\i\o\y\1\f\b\6\q\1\c\x\l\a\q\d\k\0\v\n\1\3\m\d\7\0\s\h\6\u\y\c\2\7\b\l\e\0\5\8\p\u\9\f\0\h\1\2\f\0\t\r\8\a\3\g\2\z\k\0\x\j\s\y\k\q\u\x\f\7\t\y\1\o\4\4\m\w\m\2\d\i\9\k\y\q\7\k\3\2\e\v\v\f\m\e\a\u\c\a\6\k\b\n\y\6\d\8\f\r\a\a\k\r\4\2\n\5\7\q\w\g\y\r\c\l\8\v\3\k\f\g\e\j\n\6\s\5\p\h\i\s\t\2\5\q\h\5\t\t\f\w\h\z\o\m\c\j\z\7\z\2\o\n\k\x\7\4\v\1\e\s\8\r\o\p\h\2\q\4\w\6\j\7\l\t\x\x\h\g\a\g\r\v\2\j\p\7\d\2\u\a\7\a\0\j\y\b\r\d\6\k\d\r\y\b\k\d\p\j\y\7\u\i\b\l\e\t\h\3\2\d\f\j\3\a\9\k\t\t\h\i\v\f\j\a\g\9\2\r\q\1\j\g\o\x\2\s\d\n\3\i\o\h\e\m\s\0\x\1\l\y\4\p\x\v\x\k\p\b\l\x\r\w\1\2\3\d\y\x\o\x\k\f\u\k\8\m\i\b\l\9\l\5\l\g\q\u\1\a\p\9\y\s\k\z\d\6\0\5\5\y\r\l\u\2\k\q\w\z\j\n\4\5\8\1\q\9\r\d\z\d\g\8\8\o\m\n\b\x\7\a\u\g\8\d\4\7\m\i\y\j\8\7\y\w\2\3\v\b\j\w\h\1\t\8\6\w\t\n\t\l\x\v\z\m\q\k\0\z\5\c\t\v\j\z\v\4\u\f\c\g\a\t\x\7\9\h\y\d\t\v\d\j\3\l\k\r\6\x\m\e\i\a\5\a\l\4\d\o\e\e\g\w\y\3\u\u\s\n\n\3\b\e\d\7\6\x\3\j\b\c\h\t\w\g\3\a\5\m\j\8\p\0\c\r\f\b\q\3\q\m\6\i\g\d\z\k\f\2\s\y\5\s\7\a\z\o\k\w\9 ]] 00:05:46.254 00:05:46.254 real 0m1.184s 00:05:46.254 user 0m0.578s 00:05:46.254 sys 0m0.367s 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.254 ************************************ 00:05:46.254 END TEST dd_flag_nofollow 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:46.254 ************************************ 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:46.254 ************************************ 00:05:46.254 START TEST dd_flag_noatime 00:05:46.254 ************************************ 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733487212 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733487212 00:05:46.254 12:13:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:05:47.191 12:13:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:47.191 [2024-12-06 12:13:33.807212] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:47.191 [2024-12-06 12:13:33.807314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60008 ] 00:05:47.450 [2024-12-06 12:13:33.953984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.450 [2024-12-06 12:13:33.986872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.450 [2024-12-06 12:13:34.020540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.450  [2024-12-06T12:13:34.371Z] Copying: 512/512 [B] (average 500 kBps) 00:05:47.713 00:05:47.713 12:13:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:47.713 12:13:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733487212 )) 00:05:47.713 12:13:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:47.713 12:13:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733487212 )) 00:05:47.713 12:13:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:47.713 [2024-12-06 12:13:34.217881] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:47.713 [2024-12-06 12:13:34.218140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60022 ] 00:05:47.713 [2024-12-06 12:13:34.361132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.972 [2024-12-06 12:13:34.392638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.972 [2024-12-06 12:13:34.423242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.972  [2024-12-06T12:13:34.630Z] Copying: 512/512 [B] (average 500 kBps) 00:05:47.972 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733487214 )) 00:05:47.972 00:05:47.972 real 0m1.834s 00:05:47.972 user 0m0.404s 00:05:47.972 sys 0m0.358s 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:47.972 ************************************ 00:05:47.972 END TEST dd_flag_noatime 00:05:47.972 ************************************ 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:47.972 ************************************ 00:05:47.972 START TEST dd_flags_misc 00:05:47.972 ************************************ 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:47.972 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:48.231 [2024-12-06 12:13:34.663275] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:48.231 [2024-12-06 12:13:34.663336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60045 ] 00:05:48.231 [2024-12-06 12:13:34.800077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.231 [2024-12-06 12:13:34.826639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.231 [2024-12-06 12:13:34.852018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.231  [2024-12-06T12:13:35.148Z] Copying: 512/512 [B] (average 500 kBps) 00:05:48.490 00:05:48.490 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qvlpyyjc9grqg8xti9gz7d03gzfvku2dao4lnc9zddqbypxq2f8e0wp5msqcv1v6d6oc2qcf9dhgotnegdjl0fbhniyadvjxwl6xh7o5v0osawag6c6o0946xbfk9ljwthv6p3j8irgcvwsh966tz6pk1490sf51snkiu6dye00dan51qo3vvbgu6o3xrwb2ig47tpythmh5jwcgfsa3vhcmj6i4i2xg73vkcg008ncrnztad0mv5ruyb4kcz73yzdu55mexpmwiu7dg877qeea4i0usdw10cd5xzv1424x3a21tzxacbefzt8af3vvfgjd3x16l0jptdhy89wo07qo7s5a7wmjivuvj1sbahvyqqf94pt2eidh27fnqddgvz6tidlinofkqw3g3e4it3iymx0irn0o5howfsp7eni9hbp3l2le8z7tyasbag1bxra6k0fpa67szzg6luwtkw2homcabyq9y0xxg150hz3i9ebax5bl3eee3our7l5mz == \q\v\l\p\y\y\j\c\9\g\r\q\g\8\x\t\i\9\g\z\7\d\0\3\g\z\f\v\k\u\2\d\a\o\4\l\n\c\9\z\d\d\q\b\y\p\x\q\2\f\8\e\0\w\p\5\m\s\q\c\v\1\v\6\d\6\o\c\2\q\c\f\9\d\h\g\o\t\n\e\g\d\j\l\0\f\b\h\n\i\y\a\d\v\j\x\w\l\6\x\h\7\o\5\v\0\o\s\a\w\a\g\6\c\6\o\0\9\4\6\x\b\f\k\9\l\j\w\t\h\v\6\p\3\j\8\i\r\g\c\v\w\s\h\9\6\6\t\z\6\p\k\1\4\9\0\s\f\5\1\s\n\k\i\u\6\d\y\e\0\0\d\a\n\5\1\q\o\3\v\v\b\g\u\6\o\3\x\r\w\b\2\i\g\4\7\t\p\y\t\h\m\h\5\j\w\c\g\f\s\a\3\v\h\c\m\j\6\i\4\i\2\x\g\7\3\v\k\c\g\0\0\8\n\c\r\n\z\t\a\d\0\m\v\5\r\u\y\b\4\k\c\z\7\3\y\z\d\u\5\5\m\e\x\p\m\w\i\u\7\d\g\8\7\7\q\e\e\a\4\i\0\u\s\d\w\1\0\c\d\5\x\z\v\1\4\2\4\x\3\a\2\1\t\z\x\a\c\b\e\f\z\t\8\a\f\3\v\v\f\g\j\d\3\x\1\6\l\0\j\p\t\d\h\y\8\9\w\o\0\7\q\o\7\s\5\a\7\w\m\j\i\v\u\v\j\1\s\b\a\h\v\y\q\q\f\9\4\p\t\2\e\i\d\h\2\7\f\n\q\d\d\g\v\z\6\t\i\d\l\i\n\o\f\k\q\w\3\g\3\e\4\i\t\3\i\y\m\x\0\i\r\n\0\o\5\h\o\w\f\s\p\7\e\n\i\9\h\b\p\3\l\2\l\e\8\z\7\t\y\a\s\b\a\g\1\b\x\r\a\6\k\0\f\p\a\6\7\s\z\z\g\6\l\u\w\t\k\w\2\h\o\m\c\a\b\y\q\9\y\0\x\x\g\1\5\0\h\z\3\i\9\e\b\a\x\5\b\l\3\e\e\e\3\o\u\r\7\l\5\m\z ]] 00:05:48.490 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:48.491 12:13:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:48.491 [2024-12-06 12:13:35.040660] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:48.491 [2024-12-06 12:13:35.040906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60060 ] 00:05:48.750 [2024-12-06 12:13:35.182259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.750 [2024-12-06 12:13:35.211198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.750 [2024-12-06 12:13:35.240206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.750  [2024-12-06T12:13:35.408Z] Copying: 512/512 [B] (average 500 kBps) 00:05:48.750 00:05:48.750 12:13:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qvlpyyjc9grqg8xti9gz7d03gzfvku2dao4lnc9zddqbypxq2f8e0wp5msqcv1v6d6oc2qcf9dhgotnegdjl0fbhniyadvjxwl6xh7o5v0osawag6c6o0946xbfk9ljwthv6p3j8irgcvwsh966tz6pk1490sf51snkiu6dye00dan51qo3vvbgu6o3xrwb2ig47tpythmh5jwcgfsa3vhcmj6i4i2xg73vkcg008ncrnztad0mv5ruyb4kcz73yzdu55mexpmwiu7dg877qeea4i0usdw10cd5xzv1424x3a21tzxacbefzt8af3vvfgjd3x16l0jptdhy89wo07qo7s5a7wmjivuvj1sbahvyqqf94pt2eidh27fnqddgvz6tidlinofkqw3g3e4it3iymx0irn0o5howfsp7eni9hbp3l2le8z7tyasbag1bxra6k0fpa67szzg6luwtkw2homcabyq9y0xxg150hz3i9ebax5bl3eee3our7l5mz == \q\v\l\p\y\y\j\c\9\g\r\q\g\8\x\t\i\9\g\z\7\d\0\3\g\z\f\v\k\u\2\d\a\o\4\l\n\c\9\z\d\d\q\b\y\p\x\q\2\f\8\e\0\w\p\5\m\s\q\c\v\1\v\6\d\6\o\c\2\q\c\f\9\d\h\g\o\t\n\e\g\d\j\l\0\f\b\h\n\i\y\a\d\v\j\x\w\l\6\x\h\7\o\5\v\0\o\s\a\w\a\g\6\c\6\o\0\9\4\6\x\b\f\k\9\l\j\w\t\h\v\6\p\3\j\8\i\r\g\c\v\w\s\h\9\6\6\t\z\6\p\k\1\4\9\0\s\f\5\1\s\n\k\i\u\6\d\y\e\0\0\d\a\n\5\1\q\o\3\v\v\b\g\u\6\o\3\x\r\w\b\2\i\g\4\7\t\p\y\t\h\m\h\5\j\w\c\g\f\s\a\3\v\h\c\m\j\6\i\4\i\2\x\g\7\3\v\k\c\g\0\0\8\n\c\r\n\z\t\a\d\0\m\v\5\r\u\y\b\4\k\c\z\7\3\y\z\d\u\5\5\m\e\x\p\m\w\i\u\7\d\g\8\7\7\q\e\e\a\4\i\0\u\s\d\w\1\0\c\d\5\x\z\v\1\4\2\4\x\3\a\2\1\t\z\x\a\c\b\e\f\z\t\8\a\f\3\v\v\f\g\j\d\3\x\1\6\l\0\j\p\t\d\h\y\8\9\w\o\0\7\q\o\7\s\5\a\7\w\m\j\i\v\u\v\j\1\s\b\a\h\v\y\q\q\f\9\4\p\t\2\e\i\d\h\2\7\f\n\q\d\d\g\v\z\6\t\i\d\l\i\n\o\f\k\q\w\3\g\3\e\4\i\t\3\i\y\m\x\0\i\r\n\0\o\5\h\o\w\f\s\p\7\e\n\i\9\h\b\p\3\l\2\l\e\8\z\7\t\y\a\s\b\a\g\1\b\x\r\a\6\k\0\f\p\a\6\7\s\z\z\g\6\l\u\w\t\k\w\2\h\o\m\c\a\b\y\q\9\y\0\x\x\g\1\5\0\h\z\3\i\9\e\b\a\x\5\b\l\3\e\e\e\3\o\u\r\7\l\5\m\z ]] 00:05:48.750 12:13:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:48.751 12:13:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:49.009 [2024-12-06 12:13:35.419985] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:49.009 [2024-12-06 12:13:35.420078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60064 ] 00:05:49.009 [2024-12-06 12:13:35.564593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.009 [2024-12-06 12:13:35.593328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.009 [2024-12-06 12:13:35.622279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.009  [2024-12-06T12:13:35.926Z] Copying: 512/512 [B] (average 250 kBps) 00:05:49.268 00:05:49.268 12:13:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qvlpyyjc9grqg8xti9gz7d03gzfvku2dao4lnc9zddqbypxq2f8e0wp5msqcv1v6d6oc2qcf9dhgotnegdjl0fbhniyadvjxwl6xh7o5v0osawag6c6o0946xbfk9ljwthv6p3j8irgcvwsh966tz6pk1490sf51snkiu6dye00dan51qo3vvbgu6o3xrwb2ig47tpythmh5jwcgfsa3vhcmj6i4i2xg73vkcg008ncrnztad0mv5ruyb4kcz73yzdu55mexpmwiu7dg877qeea4i0usdw10cd5xzv1424x3a21tzxacbefzt8af3vvfgjd3x16l0jptdhy89wo07qo7s5a7wmjivuvj1sbahvyqqf94pt2eidh27fnqddgvz6tidlinofkqw3g3e4it3iymx0irn0o5howfsp7eni9hbp3l2le8z7tyasbag1bxra6k0fpa67szzg6luwtkw2homcabyq9y0xxg150hz3i9ebax5bl3eee3our7l5mz == \q\v\l\p\y\y\j\c\9\g\r\q\g\8\x\t\i\9\g\z\7\d\0\3\g\z\f\v\k\u\2\d\a\o\4\l\n\c\9\z\d\d\q\b\y\p\x\q\2\f\8\e\0\w\p\5\m\s\q\c\v\1\v\6\d\6\o\c\2\q\c\f\9\d\h\g\o\t\n\e\g\d\j\l\0\f\b\h\n\i\y\a\d\v\j\x\w\l\6\x\h\7\o\5\v\0\o\s\a\w\a\g\6\c\6\o\0\9\4\6\x\b\f\k\9\l\j\w\t\h\v\6\p\3\j\8\i\r\g\c\v\w\s\h\9\6\6\t\z\6\p\k\1\4\9\0\s\f\5\1\s\n\k\i\u\6\d\y\e\0\0\d\a\n\5\1\q\o\3\v\v\b\g\u\6\o\3\x\r\w\b\2\i\g\4\7\t\p\y\t\h\m\h\5\j\w\c\g\f\s\a\3\v\h\c\m\j\6\i\4\i\2\x\g\7\3\v\k\c\g\0\0\8\n\c\r\n\z\t\a\d\0\m\v\5\r\u\y\b\4\k\c\z\7\3\y\z\d\u\5\5\m\e\x\p\m\w\i\u\7\d\g\8\7\7\q\e\e\a\4\i\0\u\s\d\w\1\0\c\d\5\x\z\v\1\4\2\4\x\3\a\2\1\t\z\x\a\c\b\e\f\z\t\8\a\f\3\v\v\f\g\j\d\3\x\1\6\l\0\j\p\t\d\h\y\8\9\w\o\0\7\q\o\7\s\5\a\7\w\m\j\i\v\u\v\j\1\s\b\a\h\v\y\q\q\f\9\4\p\t\2\e\i\d\h\2\7\f\n\q\d\d\g\v\z\6\t\i\d\l\i\n\o\f\k\q\w\3\g\3\e\4\i\t\3\i\y\m\x\0\i\r\n\0\o\5\h\o\w\f\s\p\7\e\n\i\9\h\b\p\3\l\2\l\e\8\z\7\t\y\a\s\b\a\g\1\b\x\r\a\6\k\0\f\p\a\6\7\s\z\z\g\6\l\u\w\t\k\w\2\h\o\m\c\a\b\y\q\9\y\0\x\x\g\1\5\0\h\z\3\i\9\e\b\a\x\5\b\l\3\e\e\e\3\o\u\r\7\l\5\m\z ]] 00:05:49.268 12:13:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:49.268 12:13:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:49.268 [2024-12-06 12:13:35.810603] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:49.268 [2024-12-06 12:13:35.810695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60073 ] 00:05:49.526 [2024-12-06 12:13:35.954395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.526 [2024-12-06 12:13:35.989334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.526 [2024-12-06 12:13:36.022432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.526  [2024-12-06T12:13:36.184Z] Copying: 512/512 [B] (average 250 kBps) 00:05:49.527 00:05:49.527 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qvlpyyjc9grqg8xti9gz7d03gzfvku2dao4lnc9zddqbypxq2f8e0wp5msqcv1v6d6oc2qcf9dhgotnegdjl0fbhniyadvjxwl6xh7o5v0osawag6c6o0946xbfk9ljwthv6p3j8irgcvwsh966tz6pk1490sf51snkiu6dye00dan51qo3vvbgu6o3xrwb2ig47tpythmh5jwcgfsa3vhcmj6i4i2xg73vkcg008ncrnztad0mv5ruyb4kcz73yzdu55mexpmwiu7dg877qeea4i0usdw10cd5xzv1424x3a21tzxacbefzt8af3vvfgjd3x16l0jptdhy89wo07qo7s5a7wmjivuvj1sbahvyqqf94pt2eidh27fnqddgvz6tidlinofkqw3g3e4it3iymx0irn0o5howfsp7eni9hbp3l2le8z7tyasbag1bxra6k0fpa67szzg6luwtkw2homcabyq9y0xxg150hz3i9ebax5bl3eee3our7l5mz == \q\v\l\p\y\y\j\c\9\g\r\q\g\8\x\t\i\9\g\z\7\d\0\3\g\z\f\v\k\u\2\d\a\o\4\l\n\c\9\z\d\d\q\b\y\p\x\q\2\f\8\e\0\w\p\5\m\s\q\c\v\1\v\6\d\6\o\c\2\q\c\f\9\d\h\g\o\t\n\e\g\d\j\l\0\f\b\h\n\i\y\a\d\v\j\x\w\l\6\x\h\7\o\5\v\0\o\s\a\w\a\g\6\c\6\o\0\9\4\6\x\b\f\k\9\l\j\w\t\h\v\6\p\3\j\8\i\r\g\c\v\w\s\h\9\6\6\t\z\6\p\k\1\4\9\0\s\f\5\1\s\n\k\i\u\6\d\y\e\0\0\d\a\n\5\1\q\o\3\v\v\b\g\u\6\o\3\x\r\w\b\2\i\g\4\7\t\p\y\t\h\m\h\5\j\w\c\g\f\s\a\3\v\h\c\m\j\6\i\4\i\2\x\g\7\3\v\k\c\g\0\0\8\n\c\r\n\z\t\a\d\0\m\v\5\r\u\y\b\4\k\c\z\7\3\y\z\d\u\5\5\m\e\x\p\m\w\i\u\7\d\g\8\7\7\q\e\e\a\4\i\0\u\s\d\w\1\0\c\d\5\x\z\v\1\4\2\4\x\3\a\2\1\t\z\x\a\c\b\e\f\z\t\8\a\f\3\v\v\f\g\j\d\3\x\1\6\l\0\j\p\t\d\h\y\8\9\w\o\0\7\q\o\7\s\5\a\7\w\m\j\i\v\u\v\j\1\s\b\a\h\v\y\q\q\f\9\4\p\t\2\e\i\d\h\2\7\f\n\q\d\d\g\v\z\6\t\i\d\l\i\n\o\f\k\q\w\3\g\3\e\4\i\t\3\i\y\m\x\0\i\r\n\0\o\5\h\o\w\f\s\p\7\e\n\i\9\h\b\p\3\l\2\l\e\8\z\7\t\y\a\s\b\a\g\1\b\x\r\a\6\k\0\f\p\a\6\7\s\z\z\g\6\l\u\w\t\k\w\2\h\o\m\c\a\b\y\q\9\y\0\x\x\g\1\5\0\h\z\3\i\9\e\b\a\x\5\b\l\3\e\e\e\3\o\u\r\7\l\5\m\z ]] 00:05:49.527 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:49.527 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:49.527 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:49.527 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:49.527 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:49.527 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:49.785 [2024-12-06 12:13:36.219907] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:49.785 [2024-12-06 12:13:36.220185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60083 ] 00:05:49.785 [2024-12-06 12:13:36.362814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.785 [2024-12-06 12:13:36.391712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.785 [2024-12-06 12:13:36.421204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.785  [2024-12-06T12:13:36.700Z] Copying: 512/512 [B] (average 500 kBps) 00:05:50.043 00:05:50.043 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hnfzz1rlugy8wdavcggh0i5masr58ih7xrxuyb03b9ucggi2fkxn63rz77feis0xxg5qcchd8wa8mvx9m0atbewlvi2ejmk6qxzccrk1kmx5v53mvmsz6e87k1ew86l8l054egyv152woa5ettaibvtxbsvl3sklhr9ptnnpagu0f1jhlxt1plye6rja5s012etk1zpkxhlx5qh396sqihgu856owvgoyqu3g2jivciaphqrntzbbb7nvsg5pgynsmd8dgm3glzpkteows1tvfe95wvg0rlev8zk6gjpc2208kh1l66wz2fidi6kkzbs5v1ubqy7a9x1saluwzsxfgmpzjdgzv6tdaiaobt2wu84mh8i8nsmalxwjs8h5poxi2g20lci2omb5dc6v9x44jiwrldposz0i1c6lf072i3sv2jnhpkfh888wz4aq4421f5bzbeh6l8zmabm8lt99nmqdw0ilxmb1ksjabqhohpekoez2ifyohl9o882fz2x == \h\n\f\z\z\1\r\l\u\g\y\8\w\d\a\v\c\g\g\h\0\i\5\m\a\s\r\5\8\i\h\7\x\r\x\u\y\b\0\3\b\9\u\c\g\g\i\2\f\k\x\n\6\3\r\z\7\7\f\e\i\s\0\x\x\g\5\q\c\c\h\d\8\w\a\8\m\v\x\9\m\0\a\t\b\e\w\l\v\i\2\e\j\m\k\6\q\x\z\c\c\r\k\1\k\m\x\5\v\5\3\m\v\m\s\z\6\e\8\7\k\1\e\w\8\6\l\8\l\0\5\4\e\g\y\v\1\5\2\w\o\a\5\e\t\t\a\i\b\v\t\x\b\s\v\l\3\s\k\l\h\r\9\p\t\n\n\p\a\g\u\0\f\1\j\h\l\x\t\1\p\l\y\e\6\r\j\a\5\s\0\1\2\e\t\k\1\z\p\k\x\h\l\x\5\q\h\3\9\6\s\q\i\h\g\u\8\5\6\o\w\v\g\o\y\q\u\3\g\2\j\i\v\c\i\a\p\h\q\r\n\t\z\b\b\b\7\n\v\s\g\5\p\g\y\n\s\m\d\8\d\g\m\3\g\l\z\p\k\t\e\o\w\s\1\t\v\f\e\9\5\w\v\g\0\r\l\e\v\8\z\k\6\g\j\p\c\2\2\0\8\k\h\1\l\6\6\w\z\2\f\i\d\i\6\k\k\z\b\s\5\v\1\u\b\q\y\7\a\9\x\1\s\a\l\u\w\z\s\x\f\g\m\p\z\j\d\g\z\v\6\t\d\a\i\a\o\b\t\2\w\u\8\4\m\h\8\i\8\n\s\m\a\l\x\w\j\s\8\h\5\p\o\x\i\2\g\2\0\l\c\i\2\o\m\b\5\d\c\6\v\9\x\4\4\j\i\w\r\l\d\p\o\s\z\0\i\1\c\6\l\f\0\7\2\i\3\s\v\2\j\n\h\p\k\f\h\8\8\8\w\z\4\a\q\4\4\2\1\f\5\b\z\b\e\h\6\l\8\z\m\a\b\m\8\l\t\9\9\n\m\q\d\w\0\i\l\x\m\b\1\k\s\j\a\b\q\h\o\h\p\e\k\o\e\z\2\i\f\y\o\h\l\9\o\8\8\2\f\z\2\x ]] 00:05:50.043 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:50.043 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:50.043 [2024-12-06 12:13:36.603060] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:50.043 [2024-12-06 12:13:36.603151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60087 ] 00:05:50.301 [2024-12-06 12:13:36.738474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.301 [2024-12-06 12:13:36.768932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.301 [2024-12-06 12:13:36.801330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.301  [2024-12-06T12:13:36.959Z] Copying: 512/512 [B] (average 500 kBps) 00:05:50.301 00:05:50.301 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hnfzz1rlugy8wdavcggh0i5masr58ih7xrxuyb03b9ucggi2fkxn63rz77feis0xxg5qcchd8wa8mvx9m0atbewlvi2ejmk6qxzccrk1kmx5v53mvmsz6e87k1ew86l8l054egyv152woa5ettaibvtxbsvl3sklhr9ptnnpagu0f1jhlxt1plye6rja5s012etk1zpkxhlx5qh396sqihgu856owvgoyqu3g2jivciaphqrntzbbb7nvsg5pgynsmd8dgm3glzpkteows1tvfe95wvg0rlev8zk6gjpc2208kh1l66wz2fidi6kkzbs5v1ubqy7a9x1saluwzsxfgmpzjdgzv6tdaiaobt2wu84mh8i8nsmalxwjs8h5poxi2g20lci2omb5dc6v9x44jiwrldposz0i1c6lf072i3sv2jnhpkfh888wz4aq4421f5bzbeh6l8zmabm8lt99nmqdw0ilxmb1ksjabqhohpekoez2ifyohl9o882fz2x == \h\n\f\z\z\1\r\l\u\g\y\8\w\d\a\v\c\g\g\h\0\i\5\m\a\s\r\5\8\i\h\7\x\r\x\u\y\b\0\3\b\9\u\c\g\g\i\2\f\k\x\n\6\3\r\z\7\7\f\e\i\s\0\x\x\g\5\q\c\c\h\d\8\w\a\8\m\v\x\9\m\0\a\t\b\e\w\l\v\i\2\e\j\m\k\6\q\x\z\c\c\r\k\1\k\m\x\5\v\5\3\m\v\m\s\z\6\e\8\7\k\1\e\w\8\6\l\8\l\0\5\4\e\g\y\v\1\5\2\w\o\a\5\e\t\t\a\i\b\v\t\x\b\s\v\l\3\s\k\l\h\r\9\p\t\n\n\p\a\g\u\0\f\1\j\h\l\x\t\1\p\l\y\e\6\r\j\a\5\s\0\1\2\e\t\k\1\z\p\k\x\h\l\x\5\q\h\3\9\6\s\q\i\h\g\u\8\5\6\o\w\v\g\o\y\q\u\3\g\2\j\i\v\c\i\a\p\h\q\r\n\t\z\b\b\b\7\n\v\s\g\5\p\g\y\n\s\m\d\8\d\g\m\3\g\l\z\p\k\t\e\o\w\s\1\t\v\f\e\9\5\w\v\g\0\r\l\e\v\8\z\k\6\g\j\p\c\2\2\0\8\k\h\1\l\6\6\w\z\2\f\i\d\i\6\k\k\z\b\s\5\v\1\u\b\q\y\7\a\9\x\1\s\a\l\u\w\z\s\x\f\g\m\p\z\j\d\g\z\v\6\t\d\a\i\a\o\b\t\2\w\u\8\4\m\h\8\i\8\n\s\m\a\l\x\w\j\s\8\h\5\p\o\x\i\2\g\2\0\l\c\i\2\o\m\b\5\d\c\6\v\9\x\4\4\j\i\w\r\l\d\p\o\s\z\0\i\1\c\6\l\f\0\7\2\i\3\s\v\2\j\n\h\p\k\f\h\8\8\8\w\z\4\a\q\4\4\2\1\f\5\b\z\b\e\h\6\l\8\z\m\a\b\m\8\l\t\9\9\n\m\q\d\w\0\i\l\x\m\b\1\k\s\j\a\b\q\h\o\h\p\e\k\o\e\z\2\i\f\y\o\h\l\9\o\8\8\2\f\z\2\x ]] 00:05:50.301 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:50.301 12:13:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:50.560 [2024-12-06 12:13:36.982722] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:50.560 [2024-12-06 12:13:36.982960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60096 ] 00:05:50.560 [2024-12-06 12:13:37.128032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.560 [2024-12-06 12:13:37.157753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.560 [2024-12-06 12:13:37.187738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.560  [2024-12-06T12:13:37.475Z] Copying: 512/512 [B] (average 125 kBps) 00:05:50.817 00:05:50.817 12:13:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hnfzz1rlugy8wdavcggh0i5masr58ih7xrxuyb03b9ucggi2fkxn63rz77feis0xxg5qcchd8wa8mvx9m0atbewlvi2ejmk6qxzccrk1kmx5v53mvmsz6e87k1ew86l8l054egyv152woa5ettaibvtxbsvl3sklhr9ptnnpagu0f1jhlxt1plye6rja5s012etk1zpkxhlx5qh396sqihgu856owvgoyqu3g2jivciaphqrntzbbb7nvsg5pgynsmd8dgm3glzpkteows1tvfe95wvg0rlev8zk6gjpc2208kh1l66wz2fidi6kkzbs5v1ubqy7a9x1saluwzsxfgmpzjdgzv6tdaiaobt2wu84mh8i8nsmalxwjs8h5poxi2g20lci2omb5dc6v9x44jiwrldposz0i1c6lf072i3sv2jnhpkfh888wz4aq4421f5bzbeh6l8zmabm8lt99nmqdw0ilxmb1ksjabqhohpekoez2ifyohl9o882fz2x == \h\n\f\z\z\1\r\l\u\g\y\8\w\d\a\v\c\g\g\h\0\i\5\m\a\s\r\5\8\i\h\7\x\r\x\u\y\b\0\3\b\9\u\c\g\g\i\2\f\k\x\n\6\3\r\z\7\7\f\e\i\s\0\x\x\g\5\q\c\c\h\d\8\w\a\8\m\v\x\9\m\0\a\t\b\e\w\l\v\i\2\e\j\m\k\6\q\x\z\c\c\r\k\1\k\m\x\5\v\5\3\m\v\m\s\z\6\e\8\7\k\1\e\w\8\6\l\8\l\0\5\4\e\g\y\v\1\5\2\w\o\a\5\e\t\t\a\i\b\v\t\x\b\s\v\l\3\s\k\l\h\r\9\p\t\n\n\p\a\g\u\0\f\1\j\h\l\x\t\1\p\l\y\e\6\r\j\a\5\s\0\1\2\e\t\k\1\z\p\k\x\h\l\x\5\q\h\3\9\6\s\q\i\h\g\u\8\5\6\o\w\v\g\o\y\q\u\3\g\2\j\i\v\c\i\a\p\h\q\r\n\t\z\b\b\b\7\n\v\s\g\5\p\g\y\n\s\m\d\8\d\g\m\3\g\l\z\p\k\t\e\o\w\s\1\t\v\f\e\9\5\w\v\g\0\r\l\e\v\8\z\k\6\g\j\p\c\2\2\0\8\k\h\1\l\6\6\w\z\2\f\i\d\i\6\k\k\z\b\s\5\v\1\u\b\q\y\7\a\9\x\1\s\a\l\u\w\z\s\x\f\g\m\p\z\j\d\g\z\v\6\t\d\a\i\a\o\b\t\2\w\u\8\4\m\h\8\i\8\n\s\m\a\l\x\w\j\s\8\h\5\p\o\x\i\2\g\2\0\l\c\i\2\o\m\b\5\d\c\6\v\9\x\4\4\j\i\w\r\l\d\p\o\s\z\0\i\1\c\6\l\f\0\7\2\i\3\s\v\2\j\n\h\p\k\f\h\8\8\8\w\z\4\a\q\4\4\2\1\f\5\b\z\b\e\h\6\l\8\z\m\a\b\m\8\l\t\9\9\n\m\q\d\w\0\i\l\x\m\b\1\k\s\j\a\b\q\h\o\h\p\e\k\o\e\z\2\i\f\y\o\h\l\9\o\8\8\2\f\z\2\x ]] 00:05:50.817 12:13:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:50.817 12:13:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:50.817 [2024-12-06 12:13:37.376216] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:50.817 [2024-12-06 12:13:37.376308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60106 ] 00:05:51.074 [2024-12-06 12:13:37.519002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.074 [2024-12-06 12:13:37.547773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.074 [2024-12-06 12:13:37.577304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.074  [2024-12-06T12:13:37.732Z] Copying: 512/512 [B] (average 250 kBps) 00:05:51.074 00:05:51.074 ************************************ 00:05:51.074 END TEST dd_flags_misc 00:05:51.074 ************************************ 00:05:51.074 12:13:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hnfzz1rlugy8wdavcggh0i5masr58ih7xrxuyb03b9ucggi2fkxn63rz77feis0xxg5qcchd8wa8mvx9m0atbewlvi2ejmk6qxzccrk1kmx5v53mvmsz6e87k1ew86l8l054egyv152woa5ettaibvtxbsvl3sklhr9ptnnpagu0f1jhlxt1plye6rja5s012etk1zpkxhlx5qh396sqihgu856owvgoyqu3g2jivciaphqrntzbbb7nvsg5pgynsmd8dgm3glzpkteows1tvfe95wvg0rlev8zk6gjpc2208kh1l66wz2fidi6kkzbs5v1ubqy7a9x1saluwzsxfgmpzjdgzv6tdaiaobt2wu84mh8i8nsmalxwjs8h5poxi2g20lci2omb5dc6v9x44jiwrldposz0i1c6lf072i3sv2jnhpkfh888wz4aq4421f5bzbeh6l8zmabm8lt99nmqdw0ilxmb1ksjabqhohpekoez2ifyohl9o882fz2x == \h\n\f\z\z\1\r\l\u\g\y\8\w\d\a\v\c\g\g\h\0\i\5\m\a\s\r\5\8\i\h\7\x\r\x\u\y\b\0\3\b\9\u\c\g\g\i\2\f\k\x\n\6\3\r\z\7\7\f\e\i\s\0\x\x\g\5\q\c\c\h\d\8\w\a\8\m\v\x\9\m\0\a\t\b\e\w\l\v\i\2\e\j\m\k\6\q\x\z\c\c\r\k\1\k\m\x\5\v\5\3\m\v\m\s\z\6\e\8\7\k\1\e\w\8\6\l\8\l\0\5\4\e\g\y\v\1\5\2\w\o\a\5\e\t\t\a\i\b\v\t\x\b\s\v\l\3\s\k\l\h\r\9\p\t\n\n\p\a\g\u\0\f\1\j\h\l\x\t\1\p\l\y\e\6\r\j\a\5\s\0\1\2\e\t\k\1\z\p\k\x\h\l\x\5\q\h\3\9\6\s\q\i\h\g\u\8\5\6\o\w\v\g\o\y\q\u\3\g\2\j\i\v\c\i\a\p\h\q\r\n\t\z\b\b\b\7\n\v\s\g\5\p\g\y\n\s\m\d\8\d\g\m\3\g\l\z\p\k\t\e\o\w\s\1\t\v\f\e\9\5\w\v\g\0\r\l\e\v\8\z\k\6\g\j\p\c\2\2\0\8\k\h\1\l\6\6\w\z\2\f\i\d\i\6\k\k\z\b\s\5\v\1\u\b\q\y\7\a\9\x\1\s\a\l\u\w\z\s\x\f\g\m\p\z\j\d\g\z\v\6\t\d\a\i\a\o\b\t\2\w\u\8\4\m\h\8\i\8\n\s\m\a\l\x\w\j\s\8\h\5\p\o\x\i\2\g\2\0\l\c\i\2\o\m\b\5\d\c\6\v\9\x\4\4\j\i\w\r\l\d\p\o\s\z\0\i\1\c\6\l\f\0\7\2\i\3\s\v\2\j\n\h\p\k\f\h\8\8\8\w\z\4\a\q\4\4\2\1\f\5\b\z\b\e\h\6\l\8\z\m\a\b\m\8\l\t\9\9\n\m\q\d\w\0\i\l\x\m\b\1\k\s\j\a\b\q\h\o\h\p\e\k\o\e\z\2\i\f\y\o\h\l\9\o\8\8\2\f\z\2\x ]] 00:05:51.074 00:05:51.074 real 0m3.105s 00:05:51.074 user 0m1.545s 00:05:51.074 sys 0m1.317s 00:05:51.074 12:13:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.075 12:13:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:05:51.332 * Second test run, disabling liburing, forcing AIO 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:05:51.332 ************************************ 00:05:51.332 START TEST dd_flag_append_forced_aio 00:05:51.332 ************************************ 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=yl67e20x0n5o4ff90tpmbmar0sn9099d 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=q8np8wpy8dpx8vkc7w0y23wjf7syjzox 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s yl67e20x0n5o4ff90tpmbmar0sn9099d 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s q8np8wpy8dpx8vkc7w0y23wjf7syjzox 00:05:51.332 12:13:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:51.332 [2024-12-06 12:13:37.836819] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:51.332 [2024-12-06 12:13:37.837522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60129 ] 00:05:51.332 [2024-12-06 12:13:37.980884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.590 [2024-12-06 12:13:38.009729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.590 [2024-12-06 12:13:38.039012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.590  [2024-12-06T12:13:38.248Z] Copying: 32/32 [B] (average 31 kBps) 00:05:51.590 00:05:51.590 ************************************ 00:05:51.590 END TEST dd_flag_append_forced_aio 00:05:51.590 ************************************ 00:05:51.590 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ q8np8wpy8dpx8vkc7w0y23wjf7syjzoxyl67e20x0n5o4ff90tpmbmar0sn9099d == \q\8\n\p\8\w\p\y\8\d\p\x\8\v\k\c\7\w\0\y\2\3\w\j\f\7\s\y\j\z\o\x\y\l\6\7\e\2\0\x\0\n\5\o\4\f\f\9\0\t\p\m\b\m\a\r\0\s\n\9\0\9\9\d ]] 00:05:51.590 00:05:51.590 real 0m0.414s 00:05:51.590 user 0m0.196s 00:05:51.590 sys 0m0.098s 00:05:51.590 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.590 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:51.590 12:13:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:51.591 ************************************ 00:05:51.591 START TEST dd_flag_directory_forced_aio 00:05:51.591 ************************************ 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.591 12:13:38 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:51.591 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:51.850 [2024-12-06 12:13:38.301129] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:51.850 [2024-12-06 12:13:38.301238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60161 ] 00:05:51.850 [2024-12-06 12:13:38.444877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.850 [2024-12-06 12:13:38.474671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.850 [2024-12-06 12:13:38.504580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.109 [2024-12-06 12:13:38.523635] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:52.109 [2024-12-06 12:13:38.523928] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:52.109 [2024-12-06 12:13:38.523962] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.109 [2024-12-06 12:13:38.581624] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:52.109 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:05:52.109 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.109 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:05:52.109 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:52.109 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:52.109 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:52.110 12:13:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:52.110 [2024-12-06 12:13:38.689751] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:52.110 [2024-12-06 12:13:38.689843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60165 ] 00:05:52.369 [2024-12-06 12:13:38.828334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.369 [2024-12-06 12:13:38.858463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.369 [2024-12-06 12:13:38.891141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.369 [2024-12-06 12:13:38.909456] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:52.369 [2024-12-06 12:13:38.909507] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:52.369 [2024-12-06 12:13:38.909520] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.369 [2024-12-06 12:13:38.966470] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:52.369 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:05:52.369 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.369 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:05:52.369 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:52.369 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:52.369 12:13:39 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.369 00:05:52.369 real 0m0.774s 00:05:52.369 user 0m0.387s 00:05:52.369 sys 0m0.180s 00:05:52.369 ************************************ 00:05:52.369 END TEST dd_flag_directory_forced_aio 00:05:52.369 ************************************ 00:05:52.369 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.369 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:52.629 ************************************ 00:05:52.629 START TEST dd_flag_nofollow_forced_aio 00:05:52.629 ************************************ 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:52.629 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.629 [2024-12-06 12:13:39.129256] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:52.629 [2024-12-06 12:13:39.129988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60199 ] 00:05:52.629 [2024-12-06 12:13:39.273092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.889 [2024-12-06 12:13:39.303671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.889 [2024-12-06 12:13:39.333416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.889 [2024-12-06 12:13:39.351441] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:52.889 [2024-12-06 12:13:39.351493] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:52.889 [2024-12-06 12:13:39.351506] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.889 [2024-12-06 12:13:39.409381] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:52.889 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:52.889 [2024-12-06 12:13:39.519311] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:52.889 [2024-12-06 12:13:39.519404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60203 ] 00:05:53.149 [2024-12-06 12:13:39.654564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.149 [2024-12-06 12:13:39.683955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.149 [2024-12-06 12:13:39.713829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.149 [2024-12-06 12:13:39.732022] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:53.149 [2024-12-06 12:13:39.732073] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:53.149 [2024-12-06 12:13:39.732088] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.149 [2024-12-06 12:13:39.790538] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:53.409 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:05:53.409 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.409 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:05:53.409 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:53.409 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:53.409 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.409 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:05:53.409 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:53.409 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:53.409 12:13:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:53.409 [2024-12-06 12:13:39.921096] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:53.409 [2024-12-06 12:13:39.921375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60205 ] 00:05:53.669 [2024-12-06 12:13:40.067314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.669 [2024-12-06 12:13:40.099639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.669 [2024-12-06 12:13:40.127239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.669  [2024-12-06T12:13:40.327Z] Copying: 512/512 [B] (average 500 kBps) 00:05:53.669 00:05:53.669 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ iorwuhrt6hainjus520ym0v2pdqwp4mr37grqn0melivtdwwr2opvsav4598dmbq80swn8h7f6g4dob226q5ydk2g4aoei4oe4aofq9vyg2h5ipii9eg8jw8gixtpcbhgguc1lucbgmyb649qv2plkst95sxqguvlbuuc5v1d8sgkku46rga8a3gcr3nt2xzr96s55uxayncqelcat4l80mljogxbrk9rpt02q0tilfrp2r2ttzup9a5tlt9evwi2rn8vuv8e1qbkz33giepry91mczog7z4pd3k3fz1z8ondkle4ea4pme6tis69pinntiavk1sb0pizvcdj5yccav2govgaigod19bcl74zhjjuky1qx86sq6fkufwdvq5jb44oobf91y2kxbwxwirz6gwjema8i3k9bkjuhu17hnhupmszy58tzxsixxfww6iy87t12wddvtr5xv7e25rv2e0t7e4a84hx15r5tkrqd6v4raw5j3e3qwk0g6r0hsf == \i\o\r\w\u\h\r\t\6\h\a\i\n\j\u\s\5\2\0\y\m\0\v\2\p\d\q\w\p\4\m\r\3\7\g\r\q\n\0\m\e\l\i\v\t\d\w\w\r\2\o\p\v\s\a\v\4\5\9\8\d\m\b\q\8\0\s\w\n\8\h\7\f\6\g\4\d\o\b\2\2\6\q\5\y\d\k\2\g\4\a\o\e\i\4\o\e\4\a\o\f\q\9\v\y\g\2\h\5\i\p\i\i\9\e\g\8\j\w\8\g\i\x\t\p\c\b\h\g\g\u\c\1\l\u\c\b\g\m\y\b\6\4\9\q\v\2\p\l\k\s\t\9\5\s\x\q\g\u\v\l\b\u\u\c\5\v\1\d\8\s\g\k\k\u\4\6\r\g\a\8\a\3\g\c\r\3\n\t\2\x\z\r\9\6\s\5\5\u\x\a\y\n\c\q\e\l\c\a\t\4\l\8\0\m\l\j\o\g\x\b\r\k\9\r\p\t\0\2\q\0\t\i\l\f\r\p\2\r\2\t\t\z\u\p\9\a\5\t\l\t\9\e\v\w\i\2\r\n\8\v\u\v\8\e\1\q\b\k\z\3\3\g\i\e\p\r\y\9\1\m\c\z\o\g\7\z\4\p\d\3\k\3\f\z\1\z\8\o\n\d\k\l\e\4\e\a\4\p\m\e\6\t\i\s\6\9\p\i\n\n\t\i\a\v\k\1\s\b\0\p\i\z\v\c\d\j\5\y\c\c\a\v\2\g\o\v\g\a\i\g\o\d\1\9\b\c\l\7\4\z\h\j\j\u\k\y\1\q\x\8\6\s\q\6\f\k\u\f\w\d\v\q\5\j\b\4\4\o\o\b\f\9\1\y\2\k\x\b\w\x\w\i\r\z\6\g\w\j\e\m\a\8\i\3\k\9\b\k\j\u\h\u\1\7\h\n\h\u\p\m\s\z\y\5\8\t\z\x\s\i\x\x\f\w\w\6\i\y\8\7\t\1\2\w\d\d\v\t\r\5\x\v\7\e\2\5\r\v\2\e\0\t\7\e\4\a\8\4\h\x\1\5\r\5\t\k\r\q\d\6\v\4\r\a\w\5\j\3\e\3\q\w\k\0\g\6\r\0\h\s\f ]] 00:05:53.669 00:05:53.669 real 0m1.208s 00:05:53.669 user 0m0.601s 00:05:53.669 sys 0m0.278s 00:05:53.669 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.669 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:53.669 ************************************ 00:05:53.669 END TEST dd_flag_nofollow_forced_aio 00:05:53.669 ************************************ 00:05:53.669 12:13:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:05:53.669 12:13:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.669 12:13:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.669 12:13:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:53.669 ************************************ 00:05:53.669 START TEST dd_flag_noatime_forced_aio 00:05:53.669 ************************************ 00:05:53.669 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:05:53.669 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:05:53.669 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:05:53.669 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:05:53.928 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:53.928 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:53.928 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:53.928 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733487220 00:05:53.928 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:53.928 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733487220 00:05:53.928 12:13:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:05:54.892 12:13:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.892 [2024-12-06 12:13:41.409501] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:54.892 [2024-12-06 12:13:41.409777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60251 ] 00:05:55.151 [2024-12-06 12:13:41.553995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.151 [2024-12-06 12:13:41.585788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.151 [2024-12-06 12:13:41.618279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.151  [2024-12-06T12:13:41.809Z] Copying: 512/512 [B] (average 500 kBps) 00:05:55.151 00:05:55.151 12:13:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:55.151 12:13:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733487220 )) 00:05:55.151 12:13:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.151 12:13:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733487220 )) 00:05:55.151 12:13:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.410 [2024-12-06 12:13:41.836023] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:55.410 [2024-12-06 12:13:41.836537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60257 ] 00:05:55.410 [2024-12-06 12:13:41.980982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.410 [2024-12-06 12:13:42.013576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.410 [2024-12-06 12:13:42.042790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.410  [2024-12-06T12:13:42.327Z] Copying: 512/512 [B] (average 500 kBps) 00:05:55.669 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:55.669 ************************************ 00:05:55.669 END TEST dd_flag_noatime_forced_aio 00:05:55.669 ************************************ 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733487222 )) 00:05:55.669 00:05:55.669 real 0m1.890s 00:05:55.669 user 0m0.429s 00:05:55.669 sys 0m0.206s 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:05:55.669 ************************************ 00:05:55.669 START TEST dd_flags_misc_forced_aio 00:05:55.669 ************************************ 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:55.669 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:55.669 [2024-12-06 12:13:42.319825] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:55.669 [2024-12-06 12:13:42.319887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60290 ] 00:05:55.928 [2024-12-06 12:13:42.457551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.928 [2024-12-06 12:13:42.484097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.928 [2024-12-06 12:13:42.510105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.928  [2024-12-06T12:13:42.846Z] Copying: 512/512 [B] (average 500 kBps) 00:05:56.188 00:05:56.188 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f9ks11k4ynfewvmcxw9o3ynzwmdzchdd0lz6yxjo7parm4ou286ngv2kht88nw65laocmcjwd7z7ereeojmk0kew9n02h9rnexlp5kpmlkr24o7zhy1wnlu072mw7j9yzunhgbpw7czdg4wc2r25z7lwpinakth72yq57tpugffgevch4ok1045j47f1mlrmwhrs8wgcd006a59asl4h89io3k5401o0nddcgs9za8jjk12h0fpfe5yykbjekj3hmzavzffwhawb6oj7sr3u7488b396hpbxfqiadwxkzzmd51pn8418wsrwfp1zmcotnhmpcjjj96zjlx37pp6c2lr8312li4td6vpxdgeptjhnmut2t9gz16i2h4ssfai3mjndytq6415c27skhpbawcjxvnp7uz6edn07762tgsr6p1ksu1y96d6e9viw1ws56l9c3wbjof4y321odi3e7psubuxs9x09nr9dko4kn8gwyl1m4c027gu7jeuma9ex == 
\f\9\k\s\1\1\k\4\y\n\f\e\w\v\m\c\x\w\9\o\3\y\n\z\w\m\d\z\c\h\d\d\0\l\z\6\y\x\j\o\7\p\a\r\m\4\o\u\2\8\6\n\g\v\2\k\h\t\8\8\n\w\6\5\l\a\o\c\m\c\j\w\d\7\z\7\e\r\e\e\o\j\m\k\0\k\e\w\9\n\0\2\h\9\r\n\e\x\l\p\5\k\p\m\l\k\r\2\4\o\7\z\h\y\1\w\n\l\u\0\7\2\m\w\7\j\9\y\z\u\n\h\g\b\p\w\7\c\z\d\g\4\w\c\2\r\2\5\z\7\l\w\p\i\n\a\k\t\h\7\2\y\q\5\7\t\p\u\g\f\f\g\e\v\c\h\4\o\k\1\0\4\5\j\4\7\f\1\m\l\r\m\w\h\r\s\8\w\g\c\d\0\0\6\a\5\9\a\s\l\4\h\8\9\i\o\3\k\5\4\0\1\o\0\n\d\d\c\g\s\9\z\a\8\j\j\k\1\2\h\0\f\p\f\e\5\y\y\k\b\j\e\k\j\3\h\m\z\a\v\z\f\f\w\h\a\w\b\6\o\j\7\s\r\3\u\7\4\8\8\b\3\9\6\h\p\b\x\f\q\i\a\d\w\x\k\z\z\m\d\5\1\p\n\8\4\1\8\w\s\r\w\f\p\1\z\m\c\o\t\n\h\m\p\c\j\j\j\9\6\z\j\l\x\3\7\p\p\6\c\2\l\r\8\3\1\2\l\i\4\t\d\6\v\p\x\d\g\e\p\t\j\h\n\m\u\t\2\t\9\g\z\1\6\i\2\h\4\s\s\f\a\i\3\m\j\n\d\y\t\q\6\4\1\5\c\2\7\s\k\h\p\b\a\w\c\j\x\v\n\p\7\u\z\6\e\d\n\0\7\7\6\2\t\g\s\r\6\p\1\k\s\u\1\y\9\6\d\6\e\9\v\i\w\1\w\s\5\6\l\9\c\3\w\b\j\o\f\4\y\3\2\1\o\d\i\3\e\7\p\s\u\b\u\x\s\9\x\0\9\n\r\9\d\k\o\4\k\n\8\g\w\y\l\1\m\4\c\0\2\7\g\u\7\j\e\u\m\a\9\e\x ]] 00:05:56.188 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:56.188 12:13:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:56.188 [2024-12-06 12:13:42.709921] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:56.188 [2024-12-06 12:13:42.710154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60292 ] 00:05:56.447 [2024-12-06 12:13:42.853332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.447 [2024-12-06 12:13:42.883831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.447 [2024-12-06 12:13:42.913694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.447  [2024-12-06T12:13:43.105Z] Copying: 512/512 [B] (average 500 kBps) 00:05:56.447 00:05:56.448 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f9ks11k4ynfewvmcxw9o3ynzwmdzchdd0lz6yxjo7parm4ou286ngv2kht88nw65laocmcjwd7z7ereeojmk0kew9n02h9rnexlp5kpmlkr24o7zhy1wnlu072mw7j9yzunhgbpw7czdg4wc2r25z7lwpinakth72yq57tpugffgevch4ok1045j47f1mlrmwhrs8wgcd006a59asl4h89io3k5401o0nddcgs9za8jjk12h0fpfe5yykbjekj3hmzavzffwhawb6oj7sr3u7488b396hpbxfqiadwxkzzmd51pn8418wsrwfp1zmcotnhmpcjjj96zjlx37pp6c2lr8312li4td6vpxdgeptjhnmut2t9gz16i2h4ssfai3mjndytq6415c27skhpbawcjxvnp7uz6edn07762tgsr6p1ksu1y96d6e9viw1ws56l9c3wbjof4y321odi3e7psubuxs9x09nr9dko4kn8gwyl1m4c027gu7jeuma9ex == 
\f\9\k\s\1\1\k\4\y\n\f\e\w\v\m\c\x\w\9\o\3\y\n\z\w\m\d\z\c\h\d\d\0\l\z\6\y\x\j\o\7\p\a\r\m\4\o\u\2\8\6\n\g\v\2\k\h\t\8\8\n\w\6\5\l\a\o\c\m\c\j\w\d\7\z\7\e\r\e\e\o\j\m\k\0\k\e\w\9\n\0\2\h\9\r\n\e\x\l\p\5\k\p\m\l\k\r\2\4\o\7\z\h\y\1\w\n\l\u\0\7\2\m\w\7\j\9\y\z\u\n\h\g\b\p\w\7\c\z\d\g\4\w\c\2\r\2\5\z\7\l\w\p\i\n\a\k\t\h\7\2\y\q\5\7\t\p\u\g\f\f\g\e\v\c\h\4\o\k\1\0\4\5\j\4\7\f\1\m\l\r\m\w\h\r\s\8\w\g\c\d\0\0\6\a\5\9\a\s\l\4\h\8\9\i\o\3\k\5\4\0\1\o\0\n\d\d\c\g\s\9\z\a\8\j\j\k\1\2\h\0\f\p\f\e\5\y\y\k\b\j\e\k\j\3\h\m\z\a\v\z\f\f\w\h\a\w\b\6\o\j\7\s\r\3\u\7\4\8\8\b\3\9\6\h\p\b\x\f\q\i\a\d\w\x\k\z\z\m\d\5\1\p\n\8\4\1\8\w\s\r\w\f\p\1\z\m\c\o\t\n\h\m\p\c\j\j\j\9\6\z\j\l\x\3\7\p\p\6\c\2\l\r\8\3\1\2\l\i\4\t\d\6\v\p\x\d\g\e\p\t\j\h\n\m\u\t\2\t\9\g\z\1\6\i\2\h\4\s\s\f\a\i\3\m\j\n\d\y\t\q\6\4\1\5\c\2\7\s\k\h\p\b\a\w\c\j\x\v\n\p\7\u\z\6\e\d\n\0\7\7\6\2\t\g\s\r\6\p\1\k\s\u\1\y\9\6\d\6\e\9\v\i\w\1\w\s\5\6\l\9\c\3\w\b\j\o\f\4\y\3\2\1\o\d\i\3\e\7\p\s\u\b\u\x\s\9\x\0\9\n\r\9\d\k\o\4\k\n\8\g\w\y\l\1\m\4\c\0\2\7\g\u\7\j\e\u\m\a\9\e\x ]] 00:05:56.448 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:56.448 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:56.708 [2024-12-06 12:13:43.123249] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:56.708 [2024-12-06 12:13:43.123337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60298 ] 00:05:56.708 [2024-12-06 12:13:43.267055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.708 [2024-12-06 12:13:43.295402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.708 [2024-12-06 12:13:43.323044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.708  [2024-12-06T12:13:43.626Z] Copying: 512/512 [B] (average 125 kBps) 00:05:56.968 00:05:56.968 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f9ks11k4ynfewvmcxw9o3ynzwmdzchdd0lz6yxjo7parm4ou286ngv2kht88nw65laocmcjwd7z7ereeojmk0kew9n02h9rnexlp5kpmlkr24o7zhy1wnlu072mw7j9yzunhgbpw7czdg4wc2r25z7lwpinakth72yq57tpugffgevch4ok1045j47f1mlrmwhrs8wgcd006a59asl4h89io3k5401o0nddcgs9za8jjk12h0fpfe5yykbjekj3hmzavzffwhawb6oj7sr3u7488b396hpbxfqiadwxkzzmd51pn8418wsrwfp1zmcotnhmpcjjj96zjlx37pp6c2lr8312li4td6vpxdgeptjhnmut2t9gz16i2h4ssfai3mjndytq6415c27skhpbawcjxvnp7uz6edn07762tgsr6p1ksu1y96d6e9viw1ws56l9c3wbjof4y321odi3e7psubuxs9x09nr9dko4kn8gwyl1m4c027gu7jeuma9ex == 
\f\9\k\s\1\1\k\4\y\n\f\e\w\v\m\c\x\w\9\o\3\y\n\z\w\m\d\z\c\h\d\d\0\l\z\6\y\x\j\o\7\p\a\r\m\4\o\u\2\8\6\n\g\v\2\k\h\t\8\8\n\w\6\5\l\a\o\c\m\c\j\w\d\7\z\7\e\r\e\e\o\j\m\k\0\k\e\w\9\n\0\2\h\9\r\n\e\x\l\p\5\k\p\m\l\k\r\2\4\o\7\z\h\y\1\w\n\l\u\0\7\2\m\w\7\j\9\y\z\u\n\h\g\b\p\w\7\c\z\d\g\4\w\c\2\r\2\5\z\7\l\w\p\i\n\a\k\t\h\7\2\y\q\5\7\t\p\u\g\f\f\g\e\v\c\h\4\o\k\1\0\4\5\j\4\7\f\1\m\l\r\m\w\h\r\s\8\w\g\c\d\0\0\6\a\5\9\a\s\l\4\h\8\9\i\o\3\k\5\4\0\1\o\0\n\d\d\c\g\s\9\z\a\8\j\j\k\1\2\h\0\f\p\f\e\5\y\y\k\b\j\e\k\j\3\h\m\z\a\v\z\f\f\w\h\a\w\b\6\o\j\7\s\r\3\u\7\4\8\8\b\3\9\6\h\p\b\x\f\q\i\a\d\w\x\k\z\z\m\d\5\1\p\n\8\4\1\8\w\s\r\w\f\p\1\z\m\c\o\t\n\h\m\p\c\j\j\j\9\6\z\j\l\x\3\7\p\p\6\c\2\l\r\8\3\1\2\l\i\4\t\d\6\v\p\x\d\g\e\p\t\j\h\n\m\u\t\2\t\9\g\z\1\6\i\2\h\4\s\s\f\a\i\3\m\j\n\d\y\t\q\6\4\1\5\c\2\7\s\k\h\p\b\a\w\c\j\x\v\n\p\7\u\z\6\e\d\n\0\7\7\6\2\t\g\s\r\6\p\1\k\s\u\1\y\9\6\d\6\e\9\v\i\w\1\w\s\5\6\l\9\c\3\w\b\j\o\f\4\y\3\2\1\o\d\i\3\e\7\p\s\u\b\u\x\s\9\x\0\9\n\r\9\d\k\o\4\k\n\8\g\w\y\l\1\m\4\c\0\2\7\g\u\7\j\e\u\m\a\9\e\x ]] 00:05:56.968 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:56.968 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:56.968 [2024-12-06 12:13:43.530756] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:56.968 [2024-12-06 12:13:43.530993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60307 ] 00:05:57.227 [2024-12-06 12:13:43.665820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.227 [2024-12-06 12:13:43.694888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.227 [2024-12-06 12:13:43.724855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.227  [2024-12-06T12:13:43.885Z] Copying: 512/512 [B] (average 500 kBps) 00:05:57.227 00:05:57.227 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f9ks11k4ynfewvmcxw9o3ynzwmdzchdd0lz6yxjo7parm4ou286ngv2kht88nw65laocmcjwd7z7ereeojmk0kew9n02h9rnexlp5kpmlkr24o7zhy1wnlu072mw7j9yzunhgbpw7czdg4wc2r25z7lwpinakth72yq57tpugffgevch4ok1045j47f1mlrmwhrs8wgcd006a59asl4h89io3k5401o0nddcgs9za8jjk12h0fpfe5yykbjekj3hmzavzffwhawb6oj7sr3u7488b396hpbxfqiadwxkzzmd51pn8418wsrwfp1zmcotnhmpcjjj96zjlx37pp6c2lr8312li4td6vpxdgeptjhnmut2t9gz16i2h4ssfai3mjndytq6415c27skhpbawcjxvnp7uz6edn07762tgsr6p1ksu1y96d6e9viw1ws56l9c3wbjof4y321odi3e7psubuxs9x09nr9dko4kn8gwyl1m4c027gu7jeuma9ex == 
\f\9\k\s\1\1\k\4\y\n\f\e\w\v\m\c\x\w\9\o\3\y\n\z\w\m\d\z\c\h\d\d\0\l\z\6\y\x\j\o\7\p\a\r\m\4\o\u\2\8\6\n\g\v\2\k\h\t\8\8\n\w\6\5\l\a\o\c\m\c\j\w\d\7\z\7\e\r\e\e\o\j\m\k\0\k\e\w\9\n\0\2\h\9\r\n\e\x\l\p\5\k\p\m\l\k\r\2\4\o\7\z\h\y\1\w\n\l\u\0\7\2\m\w\7\j\9\y\z\u\n\h\g\b\p\w\7\c\z\d\g\4\w\c\2\r\2\5\z\7\l\w\p\i\n\a\k\t\h\7\2\y\q\5\7\t\p\u\g\f\f\g\e\v\c\h\4\o\k\1\0\4\5\j\4\7\f\1\m\l\r\m\w\h\r\s\8\w\g\c\d\0\0\6\a\5\9\a\s\l\4\h\8\9\i\o\3\k\5\4\0\1\o\0\n\d\d\c\g\s\9\z\a\8\j\j\k\1\2\h\0\f\p\f\e\5\y\y\k\b\j\e\k\j\3\h\m\z\a\v\z\f\f\w\h\a\w\b\6\o\j\7\s\r\3\u\7\4\8\8\b\3\9\6\h\p\b\x\f\q\i\a\d\w\x\k\z\z\m\d\5\1\p\n\8\4\1\8\w\s\r\w\f\p\1\z\m\c\o\t\n\h\m\p\c\j\j\j\9\6\z\j\l\x\3\7\p\p\6\c\2\l\r\8\3\1\2\l\i\4\t\d\6\v\p\x\d\g\e\p\t\j\h\n\m\u\t\2\t\9\g\z\1\6\i\2\h\4\s\s\f\a\i\3\m\j\n\d\y\t\q\6\4\1\5\c\2\7\s\k\h\p\b\a\w\c\j\x\v\n\p\7\u\z\6\e\d\n\0\7\7\6\2\t\g\s\r\6\p\1\k\s\u\1\y\9\6\d\6\e\9\v\i\w\1\w\s\5\6\l\9\c\3\w\b\j\o\f\4\y\3\2\1\o\d\i\3\e\7\p\s\u\b\u\x\s\9\x\0\9\n\r\9\d\k\o\4\k\n\8\g\w\y\l\1\m\4\c\0\2\7\g\u\7\j\e\u\m\a\9\e\x ]] 00:05:57.227 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:57.227 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:05:57.227 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:57.227 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:57.487 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:57.487 12:13:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:57.487 [2024-12-06 12:13:43.948024] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
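The long "[[ f9ks11k4... == \f\9\k\s... ]]" records above are bash xtrace of the data-integrity check at dd/posix.sh line 93: after each spdk_dd run the copied file is compared byte-for-byte against the 512-byte payload written by gen_bytes, and the backslashes are only xtrace's quoting of the right-hand operand of ==. A minimal standalone reconstruction of that check (the $(< file) reads are inferred from the expanded values, not shown verbatim in the trace):

    src=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # payload written by gen_bytes 512
    dst=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1   # output of the spdk_dd copy
    # Passes only if the copy preserved every byte of the payload.
    [[ "$(< "$dst")" == "$(< "$src")" ]] && echo "payload intact"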
00:05:57.487 [2024-12-06 12:13:43.948288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60309 ] 00:05:57.487 [2024-12-06 12:13:44.092385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.487 [2024-12-06 12:13:44.126686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.745 [2024-12-06 12:13:44.158016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.745  [2024-12-06T12:13:44.403Z] Copying: 512/512 [B] (average 500 kBps) 00:05:57.745 00:05:57.745 12:13:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wunvpdfo98p0w6hfpkqvlo927ftxewi1s2soh9twwlm26rn0m2grfd4ljdyorj9rxdlllz4745xcs5wywnffcnodiyh186dgf07v0n6l0p3yd1v0pw2tbjzhhezubjgfzfmx515pm8ih6jljgotdlhkokeytgps56s04ezsssiuwubfiroh8rxb5k03fcj6vsa8y7kezt5gtxb3jsetq4jb5m4z7w1r63b1ve0t0mahsbj01kj8wif58zvnsil2bvmtenpz8tezkag315u3p13wns6n2oturb37syse1xy8d0m9hrferwx7x7ea2817j2wfpo10j3d9zn3eas9bf3u4nzzdkhwgptz7oq0ci3vhfh8a75cwoe82ys3b03115a56m9dm098w716h24v4a5vwgddg7azks6mszol0zcoqgq34wnqpz4jyodd5fpk4x1qcby9qor3wtsqncrvgphd0blh4q1njgl5x79s57a5qg9q4aoy8rehej56neixwm == \w\u\n\v\p\d\f\o\9\8\p\0\w\6\h\f\p\k\q\v\l\o\9\2\7\f\t\x\e\w\i\1\s\2\s\o\h\9\t\w\w\l\m\2\6\r\n\0\m\2\g\r\f\d\4\l\j\d\y\o\r\j\9\r\x\d\l\l\l\z\4\7\4\5\x\c\s\5\w\y\w\n\f\f\c\n\o\d\i\y\h\1\8\6\d\g\f\0\7\v\0\n\6\l\0\p\3\y\d\1\v\0\p\w\2\t\b\j\z\h\h\e\z\u\b\j\g\f\z\f\m\x\5\1\5\p\m\8\i\h\6\j\l\j\g\o\t\d\l\h\k\o\k\e\y\t\g\p\s\5\6\s\0\4\e\z\s\s\s\i\u\w\u\b\f\i\r\o\h\8\r\x\b\5\k\0\3\f\c\j\6\v\s\a\8\y\7\k\e\z\t\5\g\t\x\b\3\j\s\e\t\q\4\j\b\5\m\4\z\7\w\1\r\6\3\b\1\v\e\0\t\0\m\a\h\s\b\j\0\1\k\j\8\w\i\f\5\8\z\v\n\s\i\l\2\b\v\m\t\e\n\p\z\8\t\e\z\k\a\g\3\1\5\u\3\p\1\3\w\n\s\6\n\2\o\t\u\r\b\3\7\s\y\s\e\1\x\y\8\d\0\m\9\h\r\f\e\r\w\x\7\x\7\e\a\2\8\1\7\j\2\w\f\p\o\1\0\j\3\d\9\z\n\3\e\a\s\9\b\f\3\u\4\n\z\z\d\k\h\w\g\p\t\z\7\o\q\0\c\i\3\v\h\f\h\8\a\7\5\c\w\o\e\8\2\y\s\3\b\0\3\1\1\5\a\5\6\m\9\d\m\0\9\8\w\7\1\6\h\2\4\v\4\a\5\v\w\g\d\d\g\7\a\z\k\s\6\m\s\z\o\l\0\z\c\o\q\g\q\3\4\w\n\q\p\z\4\j\y\o\d\d\5\f\p\k\4\x\1\q\c\b\y\9\q\o\r\3\w\t\s\q\n\c\r\v\g\p\h\d\0\b\l\h\4\q\1\n\j\g\l\5\x\7\9\s\5\7\a\5\q\g\9\q\4\a\o\y\8\r\e\h\e\j\5\6\n\e\i\x\w\m ]] 00:05:57.745 12:13:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:57.745 12:13:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:57.745 [2024-12-06 12:13:44.367238] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:57.745 [2024-12-06 12:13:44.367329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60322 ] 00:05:58.003 [2024-12-06 12:13:44.512169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.003 [2024-12-06 12:13:44.542861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.003 [2024-12-06 12:13:44.574364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.003  [2024-12-06T12:13:44.920Z] Copying: 512/512 [B] (average 500 kBps) 00:05:58.262 00:05:58.262 12:13:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wunvpdfo98p0w6hfpkqvlo927ftxewi1s2soh9twwlm26rn0m2grfd4ljdyorj9rxdlllz4745xcs5wywnffcnodiyh186dgf07v0n6l0p3yd1v0pw2tbjzhhezubjgfzfmx515pm8ih6jljgotdlhkokeytgps56s04ezsssiuwubfiroh8rxb5k03fcj6vsa8y7kezt5gtxb3jsetq4jb5m4z7w1r63b1ve0t0mahsbj01kj8wif58zvnsil2bvmtenpz8tezkag315u3p13wns6n2oturb37syse1xy8d0m9hrferwx7x7ea2817j2wfpo10j3d9zn3eas9bf3u4nzzdkhwgptz7oq0ci3vhfh8a75cwoe82ys3b03115a56m9dm098w716h24v4a5vwgddg7azks6mszol0zcoqgq34wnqpz4jyodd5fpk4x1qcby9qor3wtsqncrvgphd0blh4q1njgl5x79s57a5qg9q4aoy8rehej56neixwm == \w\u\n\v\p\d\f\o\9\8\p\0\w\6\h\f\p\k\q\v\l\o\9\2\7\f\t\x\e\w\i\1\s\2\s\o\h\9\t\w\w\l\m\2\6\r\n\0\m\2\g\r\f\d\4\l\j\d\y\o\r\j\9\r\x\d\l\l\l\z\4\7\4\5\x\c\s\5\w\y\w\n\f\f\c\n\o\d\i\y\h\1\8\6\d\g\f\0\7\v\0\n\6\l\0\p\3\y\d\1\v\0\p\w\2\t\b\j\z\h\h\e\z\u\b\j\g\f\z\f\m\x\5\1\5\p\m\8\i\h\6\j\l\j\g\o\t\d\l\h\k\o\k\e\y\t\g\p\s\5\6\s\0\4\e\z\s\s\s\i\u\w\u\b\f\i\r\o\h\8\r\x\b\5\k\0\3\f\c\j\6\v\s\a\8\y\7\k\e\z\t\5\g\t\x\b\3\j\s\e\t\q\4\j\b\5\m\4\z\7\w\1\r\6\3\b\1\v\e\0\t\0\m\a\h\s\b\j\0\1\k\j\8\w\i\f\5\8\z\v\n\s\i\l\2\b\v\m\t\e\n\p\z\8\t\e\z\k\a\g\3\1\5\u\3\p\1\3\w\n\s\6\n\2\o\t\u\r\b\3\7\s\y\s\e\1\x\y\8\d\0\m\9\h\r\f\e\r\w\x\7\x\7\e\a\2\8\1\7\j\2\w\f\p\o\1\0\j\3\d\9\z\n\3\e\a\s\9\b\f\3\u\4\n\z\z\d\k\h\w\g\p\t\z\7\o\q\0\c\i\3\v\h\f\h\8\a\7\5\c\w\o\e\8\2\y\s\3\b\0\3\1\1\5\a\5\6\m\9\d\m\0\9\8\w\7\1\6\h\2\4\v\4\a\5\v\w\g\d\d\g\7\a\z\k\s\6\m\s\z\o\l\0\z\c\o\q\g\q\3\4\w\n\q\p\z\4\j\y\o\d\d\5\f\p\k\4\x\1\q\c\b\y\9\q\o\r\3\w\t\s\q\n\c\r\v\g\p\h\d\0\b\l\h\4\q\1\n\j\g\l\5\x\7\9\s\5\7\a\5\q\g\9\q\4\a\o\y\8\r\e\h\e\j\5\6\n\e\i\x\w\m ]] 00:05:58.262 12:13:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:58.262 12:13:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:58.262 [2024-12-06 12:13:44.781661] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:58.262 [2024-12-06 12:13:44.781752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60324 ] 00:05:58.525 [2024-12-06 12:13:44.925186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.525 [2024-12-06 12:13:44.954171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.525 [2024-12-06 12:13:44.983070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.525  [2024-12-06T12:13:45.183Z] Copying: 512/512 [B] (average 500 kBps) 00:05:58.525 00:05:58.525 12:13:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wunvpdfo98p0w6hfpkqvlo927ftxewi1s2soh9twwlm26rn0m2grfd4ljdyorj9rxdlllz4745xcs5wywnffcnodiyh186dgf07v0n6l0p3yd1v0pw2tbjzhhezubjgfzfmx515pm8ih6jljgotdlhkokeytgps56s04ezsssiuwubfiroh8rxb5k03fcj6vsa8y7kezt5gtxb3jsetq4jb5m4z7w1r63b1ve0t0mahsbj01kj8wif58zvnsil2bvmtenpz8tezkag315u3p13wns6n2oturb37syse1xy8d0m9hrferwx7x7ea2817j2wfpo10j3d9zn3eas9bf3u4nzzdkhwgptz7oq0ci3vhfh8a75cwoe82ys3b03115a56m9dm098w716h24v4a5vwgddg7azks6mszol0zcoqgq34wnqpz4jyodd5fpk4x1qcby9qor3wtsqncrvgphd0blh4q1njgl5x79s57a5qg9q4aoy8rehej56neixwm == \w\u\n\v\p\d\f\o\9\8\p\0\w\6\h\f\p\k\q\v\l\o\9\2\7\f\t\x\e\w\i\1\s\2\s\o\h\9\t\w\w\l\m\2\6\r\n\0\m\2\g\r\f\d\4\l\j\d\y\o\r\j\9\r\x\d\l\l\l\z\4\7\4\5\x\c\s\5\w\y\w\n\f\f\c\n\o\d\i\y\h\1\8\6\d\g\f\0\7\v\0\n\6\l\0\p\3\y\d\1\v\0\p\w\2\t\b\j\z\h\h\e\z\u\b\j\g\f\z\f\m\x\5\1\5\p\m\8\i\h\6\j\l\j\g\o\t\d\l\h\k\o\k\e\y\t\g\p\s\5\6\s\0\4\e\z\s\s\s\i\u\w\u\b\f\i\r\o\h\8\r\x\b\5\k\0\3\f\c\j\6\v\s\a\8\y\7\k\e\z\t\5\g\t\x\b\3\j\s\e\t\q\4\j\b\5\m\4\z\7\w\1\r\6\3\b\1\v\e\0\t\0\m\a\h\s\b\j\0\1\k\j\8\w\i\f\5\8\z\v\n\s\i\l\2\b\v\m\t\e\n\p\z\8\t\e\z\k\a\g\3\1\5\u\3\p\1\3\w\n\s\6\n\2\o\t\u\r\b\3\7\s\y\s\e\1\x\y\8\d\0\m\9\h\r\f\e\r\w\x\7\x\7\e\a\2\8\1\7\j\2\w\f\p\o\1\0\j\3\d\9\z\n\3\e\a\s\9\b\f\3\u\4\n\z\z\d\k\h\w\g\p\t\z\7\o\q\0\c\i\3\v\h\f\h\8\a\7\5\c\w\o\e\8\2\y\s\3\b\0\3\1\1\5\a\5\6\m\9\d\m\0\9\8\w\7\1\6\h\2\4\v\4\a\5\v\w\g\d\d\g\7\a\z\k\s\6\m\s\z\o\l\0\z\c\o\q\g\q\3\4\w\n\q\p\z\4\j\y\o\d\d\5\f\p\k\4\x\1\q\c\b\y\9\q\o\r\3\w\t\s\q\n\c\r\v\g\p\h\d\0\b\l\h\4\q\1\n\j\g\l\5\x\7\9\s\5\7\a\5\q\g\9\q\4\a\o\y\8\r\e\h\e\j\5\6\n\e\i\x\w\m ]] 00:05:58.525 12:13:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:58.525 12:13:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:58.783 [2024-12-06 12:13:45.187214] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:58.783 [2024-12-06 12:13:45.187302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60327 ] 00:05:58.783 [2024-12-06 12:13:45.332045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.783 [2024-12-06 12:13:45.358534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.783 [2024-12-06 12:13:45.384106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.783  [2024-12-06T12:13:45.699Z] Copying: 512/512 [B] (average 500 kBps) 00:05:59.041 00:05:59.041 12:13:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wunvpdfo98p0w6hfpkqvlo927ftxewi1s2soh9twwlm26rn0m2grfd4ljdyorj9rxdlllz4745xcs5wywnffcnodiyh186dgf07v0n6l0p3yd1v0pw2tbjzhhezubjgfzfmx515pm8ih6jljgotdlhkokeytgps56s04ezsssiuwubfiroh8rxb5k03fcj6vsa8y7kezt5gtxb3jsetq4jb5m4z7w1r63b1ve0t0mahsbj01kj8wif58zvnsil2bvmtenpz8tezkag315u3p13wns6n2oturb37syse1xy8d0m9hrferwx7x7ea2817j2wfpo10j3d9zn3eas9bf3u4nzzdkhwgptz7oq0ci3vhfh8a75cwoe82ys3b03115a56m9dm098w716h24v4a5vwgddg7azks6mszol0zcoqgq34wnqpz4jyodd5fpk4x1qcby9qor3wtsqncrvgphd0blh4q1njgl5x79s57a5qg9q4aoy8rehej56neixwm == \w\u\n\v\p\d\f\o\9\8\p\0\w\6\h\f\p\k\q\v\l\o\9\2\7\f\t\x\e\w\i\1\s\2\s\o\h\9\t\w\w\l\m\2\6\r\n\0\m\2\g\r\f\d\4\l\j\d\y\o\r\j\9\r\x\d\l\l\l\z\4\7\4\5\x\c\s\5\w\y\w\n\f\f\c\n\o\d\i\y\h\1\8\6\d\g\f\0\7\v\0\n\6\l\0\p\3\y\d\1\v\0\p\w\2\t\b\j\z\h\h\e\z\u\b\j\g\f\z\f\m\x\5\1\5\p\m\8\i\h\6\j\l\j\g\o\t\d\l\h\k\o\k\e\y\t\g\p\s\5\6\s\0\4\e\z\s\s\s\i\u\w\u\b\f\i\r\o\h\8\r\x\b\5\k\0\3\f\c\j\6\v\s\a\8\y\7\k\e\z\t\5\g\t\x\b\3\j\s\e\t\q\4\j\b\5\m\4\z\7\w\1\r\6\3\b\1\v\e\0\t\0\m\a\h\s\b\j\0\1\k\j\8\w\i\f\5\8\z\v\n\s\i\l\2\b\v\m\t\e\n\p\z\8\t\e\z\k\a\g\3\1\5\u\3\p\1\3\w\n\s\6\n\2\o\t\u\r\b\3\7\s\y\s\e\1\x\y\8\d\0\m\9\h\r\f\e\r\w\x\7\x\7\e\a\2\8\1\7\j\2\w\f\p\o\1\0\j\3\d\9\z\n\3\e\a\s\9\b\f\3\u\4\n\z\z\d\k\h\w\g\p\t\z\7\o\q\0\c\i\3\v\h\f\h\8\a\7\5\c\w\o\e\8\2\y\s\3\b\0\3\1\1\5\a\5\6\m\9\d\m\0\9\8\w\7\1\6\h\2\4\v\4\a\5\v\w\g\d\d\g\7\a\z\k\s\6\m\s\z\o\l\0\z\c\o\q\g\q\3\4\w\n\q\p\z\4\j\y\o\d\d\5\f\p\k\4\x\1\q\c\b\y\9\q\o\r\3\w\t\s\q\n\c\r\v\g\p\h\d\0\b\l\h\4\q\1\n\j\g\l\5\x\7\9\s\5\7\a\5\q\g\9\q\4\a\o\y\8\r\e\h\e\j\5\6\n\e\i\x\w\m ]] 00:05:59.041 00:05:59.041 real 0m3.268s 00:05:59.041 user 0m1.601s 00:05:59.041 sys 0m0.709s 00:05:59.041 12:13:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.041 ************************************ 00:05:59.041 END TEST dd_flags_misc_forced_aio 00:05:59.041 ************************************ 00:05:59.041 12:13:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:59.041 12:13:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:05:59.041 12:13:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:59.041 12:13:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:59.041 ************************************ 00:05:59.041 END TEST spdk_dd_posix 00:05:59.041 ************************************ 00:05:59.041 00:05:59.041 real 0m15.539s 00:05:59.041 user 0m6.560s 00:05:59.041 sys 0m4.268s 00:05:59.041 12:13:45 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.041 12:13:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:59.041 12:13:45 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:05:59.041 12:13:45 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.041 12:13:45 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.041 12:13:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:59.041 ************************************ 00:05:59.041 START TEST spdk_dd_malloc 00:05:59.041 ************************************ 00:05:59.041 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:05:59.299 * Looking for test storage... 00:05:59.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:59.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.299 --rc genhtml_branch_coverage=1 00:05:59.299 --rc genhtml_function_coverage=1 00:05:59.299 --rc genhtml_legend=1 00:05:59.299 --rc geninfo_all_blocks=1 00:05:59.299 --rc geninfo_unexecuted_blocks=1 00:05:59.299 00:05:59.299 ' 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:59.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.299 --rc genhtml_branch_coverage=1 00:05:59.299 --rc genhtml_function_coverage=1 00:05:59.299 --rc genhtml_legend=1 00:05:59.299 --rc geninfo_all_blocks=1 00:05:59.299 --rc geninfo_unexecuted_blocks=1 00:05:59.299 00:05:59.299 ' 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:59.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.299 --rc genhtml_branch_coverage=1 00:05:59.299 --rc genhtml_function_coverage=1 00:05:59.299 --rc genhtml_legend=1 00:05:59.299 --rc geninfo_all_blocks=1 00:05:59.299 --rc geninfo_unexecuted_blocks=1 00:05:59.299 00:05:59.299 ' 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:59.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.299 --rc genhtml_branch_coverage=1 00:05:59.299 --rc genhtml_function_coverage=1 00:05:59.299 --rc genhtml_legend=1 00:05:59.299 --rc geninfo_all_blocks=1 00:05:59.299 --rc geninfo_unexecuted_blocks=1 00:05:59.299 00:05:59.299 ' 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.299 12:13:45 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.299 12:13:45 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:05:59.300 ************************************ 00:05:59.300 START TEST dd_malloc_copy 00:05:59.300 ************************************ 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
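The two malloc bdevs declared above are 1048576 blocks of 512 bytes each, i.e. 512 MiB apiece, which is why the progress lines that follow report Copying: 512/512 [MB]. A quick check of that arithmetic:

    # 1048576 blocks x 512 B per malloc bdev
    echo $(( 1048576 * 512 ))                  # 536870912 bytes
    echo $(( 1048576 * 512 / 1024 / 1024 ))    # 512 MiB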
00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:05:59.300 12:13:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:05:59.300 [2024-12-06 12:13:45.912121] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:59.300 [2024-12-06 12:13:45.912399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60408 ] 00:05:59.300 { 00:05:59.300 "subsystems": [ 00:05:59.300 { 00:05:59.300 "subsystem": "bdev", 00:05:59.300 "config": [ 00:05:59.300 { 00:05:59.300 "params": { 00:05:59.300 "block_size": 512, 00:05:59.300 "num_blocks": 1048576, 00:05:59.300 "name": "malloc0" 00:05:59.300 }, 00:05:59.300 "method": "bdev_malloc_create" 00:05:59.300 }, 00:05:59.300 { 00:05:59.300 "params": { 00:05:59.300 "block_size": 512, 00:05:59.300 "num_blocks": 1048576, 00:05:59.300 "name": "malloc1" 00:05:59.300 }, 00:05:59.300 "method": "bdev_malloc_create" 00:05:59.300 }, 00:05:59.300 { 00:05:59.300 "method": "bdev_wait_for_examine" 00:05:59.300 } 00:05:59.300 ] 00:05:59.300 } 00:05:59.300 ] 00:05:59.300 } 00:05:59.558 [2024-12-06 12:13:46.058620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.558 [2024-12-06 12:13:46.086269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.558 [2024-12-06 12:13:46.113782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.937  [2024-12-06T12:13:48.534Z] Copying: 245/512 [MB] (245 MBps) [2024-12-06T12:13:48.534Z] Copying: 492/512 [MB] (246 MBps) [2024-12-06T12:13:48.793Z] Copying: 512/512 [MB] (average 246 MBps) 00:06:02.135 00:06:02.135 12:13:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:02.135 12:13:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:02.135 12:13:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:02.135 12:13:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:02.135 [2024-12-06 12:13:48.720362] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:02.135 [2024-12-06 12:13:48.720616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60450 ] 00:06:02.135 { 00:06:02.135 "subsystems": [ 00:06:02.135 { 00:06:02.135 "subsystem": "bdev", 00:06:02.135 "config": [ 00:06:02.135 { 00:06:02.135 "params": { 00:06:02.135 "block_size": 512, 00:06:02.135 "num_blocks": 1048576, 00:06:02.135 "name": "malloc0" 00:06:02.135 }, 00:06:02.135 "method": "bdev_malloc_create" 00:06:02.135 }, 00:06:02.135 { 00:06:02.135 "params": { 00:06:02.135 "block_size": 512, 00:06:02.135 "num_blocks": 1048576, 00:06:02.135 "name": "malloc1" 00:06:02.135 }, 00:06:02.135 "method": "bdev_malloc_create" 00:06:02.135 }, 00:06:02.135 { 00:06:02.135 "method": "bdev_wait_for_examine" 00:06:02.135 } 00:06:02.135 ] 00:06:02.135 } 00:06:02.135 ] 00:06:02.135 } 00:06:02.394 [2024-12-06 12:13:48.863733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.394 [2024-12-06 12:13:48.894441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.394 [2024-12-06 12:13:48.926366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.772  [2024-12-06T12:13:51.366Z] Copying: 246/512 [MB] (246 MBps) [2024-12-06T12:13:51.366Z] Copying: 495/512 [MB] (248 MBps) [2024-12-06T12:13:51.625Z] Copying: 512/512 [MB] (average 247 MBps) 00:06:04.967 00:06:04.967 00:06:04.967 real 0m5.619s 00:06:04.967 user 0m5.010s 00:06:04.967 sys 0m0.472s 00:06:04.967 12:13:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.967 ************************************ 00:06:04.967 END TEST dd_malloc_copy 00:06:04.967 ************************************ 00:06:04.967 12:13:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:04.967 ************************************ 00:06:04.967 END TEST spdk_dd_malloc 00:06:04.967 ************************************ 00:06:04.967 00:06:04.967 real 0m5.869s 00:06:04.967 user 0m5.163s 00:06:04.967 sys 0m0.570s 00:06:04.967 12:13:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.967 12:13:51 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:04.967 12:13:51 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:04.967 12:13:51 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:04.967 12:13:51 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.967 12:13:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:04.967 ************************************ 00:06:04.967 START TEST spdk_dd_bdev_to_bdev 00:06:04.967 ************************************ 00:06:04.967 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:05.227 * Looking for test storage... 
00:06:05.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.227 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.228 --rc genhtml_branch_coverage=1 00:06:05.228 --rc genhtml_function_coverage=1 00:06:05.228 --rc genhtml_legend=1 00:06:05.228 --rc geninfo_all_blocks=1 00:06:05.228 --rc geninfo_unexecuted_blocks=1 00:06:05.228 00:06:05.228 ' 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.228 --rc genhtml_branch_coverage=1 00:06:05.228 --rc genhtml_function_coverage=1 00:06:05.228 --rc genhtml_legend=1 00:06:05.228 --rc geninfo_all_blocks=1 00:06:05.228 --rc geninfo_unexecuted_blocks=1 00:06:05.228 00:06:05.228 ' 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.228 --rc genhtml_branch_coverage=1 00:06:05.228 --rc genhtml_function_coverage=1 00:06:05.228 --rc genhtml_legend=1 00:06:05.228 --rc geninfo_all_blocks=1 00:06:05.228 --rc geninfo_unexecuted_blocks=1 00:06:05.228 00:06:05.228 ' 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.228 --rc genhtml_branch_coverage=1 00:06:05.228 --rc genhtml_function_coverage=1 00:06:05.228 --rc genhtml_legend=1 00:06:05.228 --rc geninfo_all_blocks=1 00:06:05.228 --rc geninfo_unexecuted_blocks=1 00:06:05.228 00:06:05.228 ' 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.228 12:13:51 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.228 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:05.228 ************************************ 00:06:05.228 START TEST dd_inflate_file 00:06:05.228 ************************************ 00:06:05.229 12:13:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:05.229 [2024-12-06 12:13:51.846440] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:05.229 [2024-12-06 12:13:51.846707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60557 ] 00:06:05.488 [2024-12-06 12:13:51.999246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.488 [2024-12-06 12:13:52.039333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.488 [2024-12-06 12:13:52.076471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.488  [2024-12-06T12:13:52.405Z] Copying: 64/64 [MB] (average 1560 MBps) 00:06:05.747 00:06:05.747 00:06:05.747 real 0m0.474s 00:06:05.747 user 0m0.264s 00:06:05.747 sys 0m0.240s 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:05.747 ************************************ 00:06:05.747 END TEST dd_inflate_file 00:06:05.747 ************************************ 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:05.747 ************************************ 00:06:05.747 START TEST dd_copy_to_out_bdev 00:06:05.747 ************************************ 00:06:05.747 12:13:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:05.747 { 00:06:05.747 "subsystems": [ 00:06:05.747 { 00:06:05.747 "subsystem": "bdev", 00:06:05.747 "config": [ 00:06:05.747 { 00:06:05.747 "params": { 00:06:05.747 "trtype": "pcie", 00:06:05.747 "traddr": "0000:00:10.0", 00:06:05.747 "name": "Nvme0" 00:06:05.747 }, 00:06:05.747 "method": "bdev_nvme_attach_controller" 00:06:05.747 }, 00:06:05.747 { 00:06:05.747 "params": { 00:06:05.747 "trtype": "pcie", 00:06:05.747 "traddr": "0000:00:11.0", 00:06:05.747 "name": "Nvme1" 00:06:05.747 }, 00:06:05.747 "method": "bdev_nvme_attach_controller" 00:06:05.747 }, 00:06:05.747 { 00:06:05.747 "method": "bdev_wait_for_examine" 00:06:05.747 } 00:06:05.747 ] 00:06:05.747 } 00:06:05.747 ] 00:06:05.747 } 00:06:05.747 [2024-12-06 12:13:52.369382] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
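The test_file0_size=67108891 recorded above follows from the setup a few records earlier: dd.dump0 starts as the 26-character magic line plus echo's trailing newline, and dd_inflate_file then appends 64 blocks of 1048576 zero bytes (--oflag=append --bs=1048576 --count=64). The trailing newline is inferred from echo's default behavior rather than shown in the trace:

    # magic line (26 chars + newline) + 64 x 1 MiB appended by dd_inflate_file
    echo $(( 26 + 1 + 64 * 1048576 ))   # 67108891, the size reported by wc -c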
00:06:05.747 [2024-12-06 12:13:52.369476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60585 ] 00:06:06.007 [2024-12-06 12:13:52.508074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.007 [2024-12-06 12:13:52.537233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.007 [2024-12-06 12:13:52.566561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.386  [2024-12-06T12:13:54.044Z] Copying: 50/64 [MB] (50 MBps) [2024-12-06T12:13:54.303Z] Copying: 64/64 [MB] (average 50 MBps) 00:06:07.645 00:06:07.645 00:06:07.645 real 0m1.822s 00:06:07.645 user 0m1.644s 00:06:07.645 sys 0m1.501s 00:06:07.645 ************************************ 00:06:07.645 END TEST dd_copy_to_out_bdev 00:06:07.645 ************************************ 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:07.645 ************************************ 00:06:07.645 START TEST dd_offset_magic 00:06:07.645 ************************************ 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:07.645 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:07.646 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:07.646 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:07.646 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:07.646 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:07.646 { 00:06:07.646 "subsystems": [ 00:06:07.646 { 00:06:07.646 "subsystem": "bdev", 00:06:07.646 "config": [ 00:06:07.646 { 00:06:07.646 "params": { 00:06:07.646 "trtype": "pcie", 00:06:07.646 "traddr": "0000:00:10.0", 00:06:07.646 "name": "Nvme0" 00:06:07.646 }, 00:06:07.646 "method": "bdev_nvme_attach_controller" 00:06:07.646 }, 00:06:07.646 { 00:06:07.646 "params": { 00:06:07.646 "trtype": "pcie", 00:06:07.646 "traddr": "0000:00:11.0", 00:06:07.646 "name": "Nvme1" 
00:06:07.646 }, 00:06:07.646 "method": "bdev_nvme_attach_controller" 00:06:07.646 }, 00:06:07.646 { 00:06:07.646 "method": "bdev_wait_for_examine" 00:06:07.646 } 00:06:07.646 ] 00:06:07.646 } 00:06:07.646 ] 00:06:07.646 } 00:06:07.646 [2024-12-06 12:13:54.246139] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:06:07.646 [2024-12-06 12:13:54.246258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60632 ] 00:06:07.905 [2024-12-06 12:13:54.392562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.905 [2024-12-06 12:13:54.425136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.905 [2024-12-06 12:13:54.454533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.164  [2024-12-06T12:13:55.082Z] Copying: 65/65 [MB] (average 955 MBps) 00:06:08.424 00:06:08.424 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:08.424 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:08.424 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:08.424 12:13:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:08.424 { 00:06:08.424 "subsystems": [ 00:06:08.424 { 00:06:08.424 "subsystem": "bdev", 00:06:08.424 "config": [ 00:06:08.424 { 00:06:08.424 "params": { 00:06:08.424 "trtype": "pcie", 00:06:08.424 "traddr": "0000:00:10.0", 00:06:08.424 "name": "Nvme0" 00:06:08.424 }, 00:06:08.424 "method": "bdev_nvme_attach_controller" 00:06:08.424 }, 00:06:08.424 { 00:06:08.424 "params": { 00:06:08.424 "trtype": "pcie", 00:06:08.424 "traddr": "0000:00:11.0", 00:06:08.424 "name": "Nvme1" 00:06:08.424 }, 00:06:08.424 "method": "bdev_nvme_attach_controller" 00:06:08.424 }, 00:06:08.424 { 00:06:08.424 "method": "bdev_wait_for_examine" 00:06:08.424 } 00:06:08.424 ] 00:06:08.424 } 00:06:08.424 ] 00:06:08.424 } 00:06:08.424 [2024-12-06 12:13:54.886822] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:08.424 [2024-12-06 12:13:54.887294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60652 ] 00:06:08.424 [2024-12-06 12:13:55.029817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.424 [2024-12-06 12:13:55.057604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.689 [2024-12-06 12:13:55.088483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.689  [2024-12-06T12:13:55.625Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:08.967 00:06:08.967 12:13:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:08.967 12:13:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:08.967 12:13:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:08.967 12:13:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:08.967 12:13:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:08.967 12:13:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:08.967 12:13:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:08.967 [2024-12-06 12:13:55.404818] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:08.967 [2024-12-06 12:13:55.405067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60663 ] 00:06:08.967 { 00:06:08.967 "subsystems": [ 00:06:08.967 { 00:06:08.967 "subsystem": "bdev", 00:06:08.967 "config": [ 00:06:08.967 { 00:06:08.967 "params": { 00:06:08.967 "trtype": "pcie", 00:06:08.967 "traddr": "0000:00:10.0", 00:06:08.967 "name": "Nvme0" 00:06:08.967 }, 00:06:08.967 "method": "bdev_nvme_attach_controller" 00:06:08.967 }, 00:06:08.967 { 00:06:08.967 "params": { 00:06:08.967 "trtype": "pcie", 00:06:08.967 "traddr": "0000:00:11.0", 00:06:08.967 "name": "Nvme1" 00:06:08.967 }, 00:06:08.967 "method": "bdev_nvme_attach_controller" 00:06:08.967 }, 00:06:08.967 { 00:06:08.967 "method": "bdev_wait_for_examine" 00:06:08.967 } 00:06:08.967 ] 00:06:08.967 } 00:06:08.967 ] 00:06:08.967 } 00:06:08.967 [2024-12-06 12:13:55.544793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.968 [2024-12-06 12:13:55.572546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.968 [2024-12-06 12:13:55.600138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.236  [2024-12-06T12:13:56.152Z] Copying: 65/65 [MB] (average 1048 MBps) 00:06:09.494 00:06:09.494 12:13:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:09.494 12:13:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:09.494 12:13:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:09.494 12:13:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:09.494 [2024-12-06 12:13:56.014475] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:09.494 [2024-12-06 12:13:56.014558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60683 ] 00:06:09.494 { 00:06:09.494 "subsystems": [ 00:06:09.494 { 00:06:09.494 "subsystem": "bdev", 00:06:09.494 "config": [ 00:06:09.494 { 00:06:09.494 "params": { 00:06:09.494 "trtype": "pcie", 00:06:09.494 "traddr": "0000:00:10.0", 00:06:09.494 "name": "Nvme0" 00:06:09.494 }, 00:06:09.494 "method": "bdev_nvme_attach_controller" 00:06:09.494 }, 00:06:09.494 { 00:06:09.494 "params": { 00:06:09.494 "trtype": "pcie", 00:06:09.494 "traddr": "0000:00:11.0", 00:06:09.494 "name": "Nvme1" 00:06:09.494 }, 00:06:09.494 "method": "bdev_nvme_attach_controller" 00:06:09.494 }, 00:06:09.494 { 00:06:09.494 "method": "bdev_wait_for_examine" 00:06:09.494 } 00:06:09.494 ] 00:06:09.494 } 00:06:09.494 ] 00:06:09.494 } 00:06:09.494 [2024-12-06 12:13:56.143936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.751 [2024-12-06 12:13:56.173478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.751 [2024-12-06 12:13:56.200804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.751  [2024-12-06T12:13:56.667Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:10.009 00:06:10.009 ************************************ 00:06:10.009 END TEST dd_offset_magic 00:06:10.009 ************************************ 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:10.009 00:06:10.009 real 0m2.291s 00:06:10.009 user 0m1.688s 00:06:10.009 sys 0m0.558s 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:10.009 12:13:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:10.009 [2024-12-06 12:13:56.580258] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:10.009 [2024-12-06 12:13:56.580365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60715 ] 00:06:10.009 { 00:06:10.009 "subsystems": [ 00:06:10.009 { 00:06:10.009 "subsystem": "bdev", 00:06:10.009 "config": [ 00:06:10.009 { 00:06:10.009 "params": { 00:06:10.009 "trtype": "pcie", 00:06:10.009 "traddr": "0000:00:10.0", 00:06:10.009 "name": "Nvme0" 00:06:10.009 }, 00:06:10.009 "method": "bdev_nvme_attach_controller" 00:06:10.009 }, 00:06:10.009 { 00:06:10.009 "params": { 00:06:10.009 "trtype": "pcie", 00:06:10.009 "traddr": "0000:00:11.0", 00:06:10.009 "name": "Nvme1" 00:06:10.009 }, 00:06:10.009 "method": "bdev_nvme_attach_controller" 00:06:10.009 }, 00:06:10.009 { 00:06:10.009 "method": "bdev_wait_for_examine" 00:06:10.009 } 00:06:10.009 ] 00:06:10.009 } 00:06:10.009 ] 00:06:10.009 } 00:06:10.267 [2024-12-06 12:13:56.721343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.267 [2024-12-06 12:13:56.751951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.267 [2024-12-06 12:13:56.781049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.526  [2024-12-06T12:13:57.184Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:10.526 00:06:10.526 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:10.526 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:10.526 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:10.526 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:10.526 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:10.526 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:10.526 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:10.526 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:10.526 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:10.526 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:10.526 [2024-12-06 12:13:57.113297] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:10.526 [2024-12-06 12:13:57.113552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60730 ] 00:06:10.526 { 00:06:10.526 "subsystems": [ 00:06:10.526 { 00:06:10.526 "subsystem": "bdev", 00:06:10.526 "config": [ 00:06:10.526 { 00:06:10.526 "params": { 00:06:10.526 "trtype": "pcie", 00:06:10.526 "traddr": "0000:00:10.0", 00:06:10.526 "name": "Nvme0" 00:06:10.526 }, 00:06:10.526 "method": "bdev_nvme_attach_controller" 00:06:10.526 }, 00:06:10.526 { 00:06:10.526 "params": { 00:06:10.526 "trtype": "pcie", 00:06:10.526 "traddr": "0000:00:11.0", 00:06:10.526 "name": "Nvme1" 00:06:10.526 }, 00:06:10.526 "method": "bdev_nvme_attach_controller" 00:06:10.526 }, 00:06:10.526 { 00:06:10.526 "method": "bdev_wait_for_examine" 00:06:10.526 } 00:06:10.526 ] 00:06:10.526 } 00:06:10.526 ] 00:06:10.526 } 00:06:10.786 [2024-12-06 12:13:57.255358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.786 [2024-12-06 12:13:57.286250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.786 [2024-12-06 12:13:57.318954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.045  [2024-12-06T12:13:57.703Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:06:11.045 00:06:11.046 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:11.046 00:06:11.046 real 0m6.046s 00:06:11.046 user 0m4.542s 00:06:11.046 sys 0m2.823s 00:06:11.046 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.046 12:13:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:11.046 ************************************ 00:06:11.046 END TEST spdk_dd_bdev_to_bdev 00:06:11.046 ************************************ 00:06:11.046 12:13:57 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:11.046 12:13:57 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:11.046 12:13:57 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.046 12:13:57 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.046 12:13:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:11.046 ************************************ 00:06:11.046 START TEST spdk_dd_uring 00:06:11.046 ************************************ 00:06:11.046 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:11.306 * Looking for test storage... 
00:06:11.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:11.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.306 --rc genhtml_branch_coverage=1 00:06:11.306 --rc genhtml_function_coverage=1 00:06:11.306 --rc genhtml_legend=1 00:06:11.306 --rc geninfo_all_blocks=1 00:06:11.306 --rc geninfo_unexecuted_blocks=1 00:06:11.306 00:06:11.306 ' 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:11.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.306 --rc genhtml_branch_coverage=1 00:06:11.306 --rc genhtml_function_coverage=1 00:06:11.306 --rc genhtml_legend=1 00:06:11.306 --rc geninfo_all_blocks=1 00:06:11.306 --rc geninfo_unexecuted_blocks=1 00:06:11.306 00:06:11.306 ' 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:11.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.306 --rc genhtml_branch_coverage=1 00:06:11.306 --rc genhtml_function_coverage=1 00:06:11.306 --rc genhtml_legend=1 00:06:11.306 --rc geninfo_all_blocks=1 00:06:11.306 --rc geninfo_unexecuted_blocks=1 00:06:11.306 00:06:11.306 ' 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:11.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.306 --rc genhtml_branch_coverage=1 00:06:11.306 --rc genhtml_function_coverage=1 00:06:11.306 --rc genhtml_legend=1 00:06:11.306 --rc geninfo_all_blocks=1 00:06:11.306 --rc geninfo_unexecuted_blocks=1 00:06:11.306 00:06:11.306 ' 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:11.306 ************************************ 00:06:11.306 START TEST dd_uring_copy 00:06:11.306 ************************************ 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:11.306 
12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:11.306 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=zjk4aqbht7zhjbb9lf91c4ousiwjhnjyrhvn4acfvvs2ljtywyq16e6nv62qhfnedp5667je637tg2fql1j3jz330mbc70co34ghmjwci2qk83dqjx7fsymdez842hb1rbkl0psqlz5cv4qx467lxzjpz7obrjv3n1tbac7z7o8py6g6tmbkoxp75fwjnhujn6sflp3ak83av0c3lrh6xk5vny77gjndddgmlslxl9p578xamtgs75ig7bfo320cih62g6zuz4p0ra30c0fsai4sj9s1hudqu06j4490h10v6yl5hb9uf6ql0z4prkqj2dj9hllq7ss93xpewbleax1ngtzp8zsl3bndqs3oszc1oscqsmjo0gywz504zgcatt70txgfluqsxsfnzhrm2ig3v3k2800k7c5jd9s8fjm2mlosnhwfhxfa35lnhc16dzkzrm7xyn6bs6srgfkxok6gj909ypxd92mxlhor7stodsmgktxxa72yo5rkxiym55dql0l0a9noyrnl2mg67tr9sqy8xhayz3soheqrcrz84l5rlyht31tp31sq3fhiorrckgqlbvrl0d4rrce5mmu294qc57znwpf79s50vkjxlts3bne7488lpcbq0ldmv7gvnojr2pfu3ayumeueehtppigdu8nz8c9qsclguv0lywna6pifi8kplueglgnbrgcka4xvtsqti251ewd5zwsfwhumkwv2j54qinb7s6bpll3r5yp45o968czi2fr5cd2ou7bghtvgntv1qr904zquf9772fwcaomgn8u3139alai3s1qjaz2g9sp0whh8b2n4ud6bcmtrakmzegd0jpk20o94goe4vvr5bexk0th8wvyp7zow97ek5br79gjwpfkyiry1mpdetyvksuf7ddutroczwg3g0m4wlyto5oarcbs99exjpn5ejh0lum2pogxh7v4wm2mzw9i0lcctsfwj1x43qyzq2xhtgawrf20digdwgoxhf0qku8iwj7e3 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
zjk4aqbht7zhjbb9lf91c4ousiwjhnjyrhvn4acfvvs2ljtywyq16e6nv62qhfnedp5667je637tg2fql1j3jz330mbc70co34ghmjwci2qk83dqjx7fsymdez842hb1rbkl0psqlz5cv4qx467lxzjpz7obrjv3n1tbac7z7o8py6g6tmbkoxp75fwjnhujn6sflp3ak83av0c3lrh6xk5vny77gjndddgmlslxl9p578xamtgs75ig7bfo320cih62g6zuz4p0ra30c0fsai4sj9s1hudqu06j4490h10v6yl5hb9uf6ql0z4prkqj2dj9hllq7ss93xpewbleax1ngtzp8zsl3bndqs3oszc1oscqsmjo0gywz504zgcatt70txgfluqsxsfnzhrm2ig3v3k2800k7c5jd9s8fjm2mlosnhwfhxfa35lnhc16dzkzrm7xyn6bs6srgfkxok6gj909ypxd92mxlhor7stodsmgktxxa72yo5rkxiym55dql0l0a9noyrnl2mg67tr9sqy8xhayz3soheqrcrz84l5rlyht31tp31sq3fhiorrckgqlbvrl0d4rrce5mmu294qc57znwpf79s50vkjxlts3bne7488lpcbq0ldmv7gvnojr2pfu3ayumeueehtppigdu8nz8c9qsclguv0lywna6pifi8kplueglgnbrgcka4xvtsqti251ewd5zwsfwhumkwv2j54qinb7s6bpll3r5yp45o968czi2fr5cd2ou7bghtvgntv1qr904zquf9772fwcaomgn8u3139alai3s1qjaz2g9sp0whh8b2n4ud6bcmtrakmzegd0jpk20o94goe4vvr5bexk0th8wvyp7zow97ek5br79gjwpfkyiry1mpdetyvksuf7ddutroczwg3g0m4wlyto5oarcbs99exjpn5ejh0lum2pogxh7v4wm2mzw9i0lcctsfwj1x43qyzq2xhtgawrf20digdwgoxhf0qku8iwj7e3 00:06:11.307 12:13:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:11.307 [2024-12-06 12:13:57.952613] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:06:11.307 [2024-12-06 12:13:57.953073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60808 ] 00:06:11.567 [2024-12-06 12:13:58.093740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.567 [2024-12-06 12:13:58.122741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.567 [2024-12-06 12:13:58.151186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.135  [2024-12-06T12:13:59.053Z] Copying: 511/511 [MB] (average 1372 MBps) 00:06:12.395 00:06:12.395 12:13:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:12.395 12:13:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:12.395 12:13:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:12.395 12:13:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:12.395 [2024-12-06 12:13:58.905572] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:12.395 [2024-12-06 12:13:58.905674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60824 ] 00:06:12.395 { 00:06:12.395 "subsystems": [ 00:06:12.395 { 00:06:12.395 "subsystem": "bdev", 00:06:12.395 "config": [ 00:06:12.395 { 00:06:12.395 "params": { 00:06:12.395 "block_size": 512, 00:06:12.395 "num_blocks": 1048576, 00:06:12.395 "name": "malloc0" 00:06:12.395 }, 00:06:12.395 "method": "bdev_malloc_create" 00:06:12.395 }, 00:06:12.395 { 00:06:12.395 "params": { 00:06:12.395 "filename": "/dev/zram1", 00:06:12.395 "name": "uring0" 00:06:12.395 }, 00:06:12.395 "method": "bdev_uring_create" 00:06:12.395 }, 00:06:12.395 { 00:06:12.395 "method": "bdev_wait_for_examine" 00:06:12.395 } 00:06:12.395 ] 00:06:12.395 } 00:06:12.395 ] 00:06:12.395 } 00:06:12.395 [2024-12-06 12:13:59.044348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.655 [2024-12-06 12:13:59.074333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.655 [2024-12-06 12:13:59.102762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.592  [2024-12-06T12:14:01.629Z] Copying: 255/512 [MB] (255 MBps) [2024-12-06T12:14:01.629Z] Copying: 512/512 [MB] (average 257 MBps) 00:06:14.971 00:06:14.971 12:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:14.971 12:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:14.971 12:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:14.971 12:14:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:14.971 [2024-12-06 12:14:01.474442] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:14.971 [2024-12-06 12:14:01.474546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60857 ] 00:06:14.971 { 00:06:14.971 "subsystems": [ 00:06:14.971 { 00:06:14.971 "subsystem": "bdev", 00:06:14.971 "config": [ 00:06:14.972 { 00:06:14.972 "params": { 00:06:14.972 "block_size": 512, 00:06:14.972 "num_blocks": 1048576, 00:06:14.972 "name": "malloc0" 00:06:14.972 }, 00:06:14.972 "method": "bdev_malloc_create" 00:06:14.972 }, 00:06:14.972 { 00:06:14.972 "params": { 00:06:14.972 "filename": "/dev/zram1", 00:06:14.972 "name": "uring0" 00:06:14.972 }, 00:06:14.972 "method": "bdev_uring_create" 00:06:14.972 }, 00:06:14.972 { 00:06:14.972 "method": "bdev_wait_for_examine" 00:06:14.972 } 00:06:14.972 ] 00:06:14.972 } 00:06:14.972 ] 00:06:14.972 } 00:06:14.972 [2024-12-06 12:14:01.618741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.231 [2024-12-06 12:14:01.650610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.231 [2024-12-06 12:14:01.681795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.168  [2024-12-06T12:14:04.205Z] Copying: 179/512 [MB] (179 MBps) [2024-12-06T12:14:04.773Z] Copying: 372/512 [MB] (192 MBps) [2024-12-06T12:14:05.032Z] Copying: 512/512 [MB] (average 180 MBps) 00:06:18.374 00:06:18.374 12:14:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:18.374 12:14:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ zjk4aqbht7zhjbb9lf91c4ousiwjhnjyrhvn4acfvvs2ljtywyq16e6nv62qhfnedp5667je637tg2fql1j3jz330mbc70co34ghmjwci2qk83dqjx7fsymdez842hb1rbkl0psqlz5cv4qx467lxzjpz7obrjv3n1tbac7z7o8py6g6tmbkoxp75fwjnhujn6sflp3ak83av0c3lrh6xk5vny77gjndddgmlslxl9p578xamtgs75ig7bfo320cih62g6zuz4p0ra30c0fsai4sj9s1hudqu06j4490h10v6yl5hb9uf6ql0z4prkqj2dj9hllq7ss93xpewbleax1ngtzp8zsl3bndqs3oszc1oscqsmjo0gywz504zgcatt70txgfluqsxsfnzhrm2ig3v3k2800k7c5jd9s8fjm2mlosnhwfhxfa35lnhc16dzkzrm7xyn6bs6srgfkxok6gj909ypxd92mxlhor7stodsmgktxxa72yo5rkxiym55dql0l0a9noyrnl2mg67tr9sqy8xhayz3soheqrcrz84l5rlyht31tp31sq3fhiorrckgqlbvrl0d4rrce5mmu294qc57znwpf79s50vkjxlts3bne7488lpcbq0ldmv7gvnojr2pfu3ayumeueehtppigdu8nz8c9qsclguv0lywna6pifi8kplueglgnbrgcka4xvtsqti251ewd5zwsfwhumkwv2j54qinb7s6bpll3r5yp45o968czi2fr5cd2ou7bghtvgntv1qr904zquf9772fwcaomgn8u3139alai3s1qjaz2g9sp0whh8b2n4ud6bcmtrakmzegd0jpk20o94goe4vvr5bexk0th8wvyp7zow97ek5br79gjwpfkyiry1mpdetyvksuf7ddutroczwg3g0m4wlyto5oarcbs99exjpn5ejh0lum2pogxh7v4wm2mzw9i0lcctsfwj1x43qyzq2xhtgawrf20digdwgoxhf0qku8iwj7e3 == 
\z\j\k\4\a\q\b\h\t\7\z\h\j\b\b\9\l\f\9\1\c\4\o\u\s\i\w\j\h\n\j\y\r\h\v\n\4\a\c\f\v\v\s\2\l\j\t\y\w\y\q\1\6\e\6\n\v\6\2\q\h\f\n\e\d\p\5\6\6\7\j\e\6\3\7\t\g\2\f\q\l\1\j\3\j\z\3\3\0\m\b\c\7\0\c\o\3\4\g\h\m\j\w\c\i\2\q\k\8\3\d\q\j\x\7\f\s\y\m\d\e\z\8\4\2\h\b\1\r\b\k\l\0\p\s\q\l\z\5\c\v\4\q\x\4\6\7\l\x\z\j\p\z\7\o\b\r\j\v\3\n\1\t\b\a\c\7\z\7\o\8\p\y\6\g\6\t\m\b\k\o\x\p\7\5\f\w\j\n\h\u\j\n\6\s\f\l\p\3\a\k\8\3\a\v\0\c\3\l\r\h\6\x\k\5\v\n\y\7\7\g\j\n\d\d\d\g\m\l\s\l\x\l\9\p\5\7\8\x\a\m\t\g\s\7\5\i\g\7\b\f\o\3\2\0\c\i\h\6\2\g\6\z\u\z\4\p\0\r\a\3\0\c\0\f\s\a\i\4\s\j\9\s\1\h\u\d\q\u\0\6\j\4\4\9\0\h\1\0\v\6\y\l\5\h\b\9\u\f\6\q\l\0\z\4\p\r\k\q\j\2\d\j\9\h\l\l\q\7\s\s\9\3\x\p\e\w\b\l\e\a\x\1\n\g\t\z\p\8\z\s\l\3\b\n\d\q\s\3\o\s\z\c\1\o\s\c\q\s\m\j\o\0\g\y\w\z\5\0\4\z\g\c\a\t\t\7\0\t\x\g\f\l\u\q\s\x\s\f\n\z\h\r\m\2\i\g\3\v\3\k\2\8\0\0\k\7\c\5\j\d\9\s\8\f\j\m\2\m\l\o\s\n\h\w\f\h\x\f\a\3\5\l\n\h\c\1\6\d\z\k\z\r\m\7\x\y\n\6\b\s\6\s\r\g\f\k\x\o\k\6\g\j\9\0\9\y\p\x\d\9\2\m\x\l\h\o\r\7\s\t\o\d\s\m\g\k\t\x\x\a\7\2\y\o\5\r\k\x\i\y\m\5\5\d\q\l\0\l\0\a\9\n\o\y\r\n\l\2\m\g\6\7\t\r\9\s\q\y\8\x\h\a\y\z\3\s\o\h\e\q\r\c\r\z\8\4\l\5\r\l\y\h\t\3\1\t\p\3\1\s\q\3\f\h\i\o\r\r\c\k\g\q\l\b\v\r\l\0\d\4\r\r\c\e\5\m\m\u\2\9\4\q\c\5\7\z\n\w\p\f\7\9\s\5\0\v\k\j\x\l\t\s\3\b\n\e\7\4\8\8\l\p\c\b\q\0\l\d\m\v\7\g\v\n\o\j\r\2\p\f\u\3\a\y\u\m\e\u\e\e\h\t\p\p\i\g\d\u\8\n\z\8\c\9\q\s\c\l\g\u\v\0\l\y\w\n\a\6\p\i\f\i\8\k\p\l\u\e\g\l\g\n\b\r\g\c\k\a\4\x\v\t\s\q\t\i\2\5\1\e\w\d\5\z\w\s\f\w\h\u\m\k\w\v\2\j\5\4\q\i\n\b\7\s\6\b\p\l\l\3\r\5\y\p\4\5\o\9\6\8\c\z\i\2\f\r\5\c\d\2\o\u\7\b\g\h\t\v\g\n\t\v\1\q\r\9\0\4\z\q\u\f\9\7\7\2\f\w\c\a\o\m\g\n\8\u\3\1\3\9\a\l\a\i\3\s\1\q\j\a\z\2\g\9\s\p\0\w\h\h\8\b\2\n\4\u\d\6\b\c\m\t\r\a\k\m\z\e\g\d\0\j\p\k\2\0\o\9\4\g\o\e\4\v\v\r\5\b\e\x\k\0\t\h\8\w\v\y\p\7\z\o\w\9\7\e\k\5\b\r\7\9\g\j\w\p\f\k\y\i\r\y\1\m\p\d\e\t\y\v\k\s\u\f\7\d\d\u\t\r\o\c\z\w\g\3\g\0\m\4\w\l\y\t\o\5\o\a\r\c\b\s\9\9\e\x\j\p\n\5\e\j\h\0\l\u\m\2\p\o\g\x\h\7\v\4\w\m\2\m\z\w\9\i\0\l\c\c\t\s\f\w\j\1\x\4\3\q\y\z\q\2\x\h\t\g\a\w\r\f\2\0\d\i\g\d\w\g\o\x\h\f\0\q\k\u\8\i\w\j\7\e\3 ]] 00:06:18.374 12:14:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:18.375 12:14:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ zjk4aqbht7zhjbb9lf91c4ousiwjhnjyrhvn4acfvvs2ljtywyq16e6nv62qhfnedp5667je637tg2fql1j3jz330mbc70co34ghmjwci2qk83dqjx7fsymdez842hb1rbkl0psqlz5cv4qx467lxzjpz7obrjv3n1tbac7z7o8py6g6tmbkoxp75fwjnhujn6sflp3ak83av0c3lrh6xk5vny77gjndddgmlslxl9p578xamtgs75ig7bfo320cih62g6zuz4p0ra30c0fsai4sj9s1hudqu06j4490h10v6yl5hb9uf6ql0z4prkqj2dj9hllq7ss93xpewbleax1ngtzp8zsl3bndqs3oszc1oscqsmjo0gywz504zgcatt70txgfluqsxsfnzhrm2ig3v3k2800k7c5jd9s8fjm2mlosnhwfhxfa35lnhc16dzkzrm7xyn6bs6srgfkxok6gj909ypxd92mxlhor7stodsmgktxxa72yo5rkxiym55dql0l0a9noyrnl2mg67tr9sqy8xhayz3soheqrcrz84l5rlyht31tp31sq3fhiorrckgqlbvrl0d4rrce5mmu294qc57znwpf79s50vkjxlts3bne7488lpcbq0ldmv7gvnojr2pfu3ayumeueehtppigdu8nz8c9qsclguv0lywna6pifi8kplueglgnbrgcka4xvtsqti251ewd5zwsfwhumkwv2j54qinb7s6bpll3r5yp45o968czi2fr5cd2ou7bghtvgntv1qr904zquf9772fwcaomgn8u3139alai3s1qjaz2g9sp0whh8b2n4ud6bcmtrakmzegd0jpk20o94goe4vvr5bexk0th8wvyp7zow97ek5br79gjwpfkyiry1mpdetyvksuf7ddutroczwg3g0m4wlyto5oarcbs99exjpn5ejh0lum2pogxh7v4wm2mzw9i0lcctsfwj1x43qyzq2xhtgawrf20digdwgoxhf0qku8iwj7e3 == 
\z\j\k\4\a\q\b\h\t\7\z\h\j\b\b\9\l\f\9\1\c\4\o\u\s\i\w\j\h\n\j\y\r\h\v\n\4\a\c\f\v\v\s\2\l\j\t\y\w\y\q\1\6\e\6\n\v\6\2\q\h\f\n\e\d\p\5\6\6\7\j\e\6\3\7\t\g\2\f\q\l\1\j\3\j\z\3\3\0\m\b\c\7\0\c\o\3\4\g\h\m\j\w\c\i\2\q\k\8\3\d\q\j\x\7\f\s\y\m\d\e\z\8\4\2\h\b\1\r\b\k\l\0\p\s\q\l\z\5\c\v\4\q\x\4\6\7\l\x\z\j\p\z\7\o\b\r\j\v\3\n\1\t\b\a\c\7\z\7\o\8\p\y\6\g\6\t\m\b\k\o\x\p\7\5\f\w\j\n\h\u\j\n\6\s\f\l\p\3\a\k\8\3\a\v\0\c\3\l\r\h\6\x\k\5\v\n\y\7\7\g\j\n\d\d\d\g\m\l\s\l\x\l\9\p\5\7\8\x\a\m\t\g\s\7\5\i\g\7\b\f\o\3\2\0\c\i\h\6\2\g\6\z\u\z\4\p\0\r\a\3\0\c\0\f\s\a\i\4\s\j\9\s\1\h\u\d\q\u\0\6\j\4\4\9\0\h\1\0\v\6\y\l\5\h\b\9\u\f\6\q\l\0\z\4\p\r\k\q\j\2\d\j\9\h\l\l\q\7\s\s\9\3\x\p\e\w\b\l\e\a\x\1\n\g\t\z\p\8\z\s\l\3\b\n\d\q\s\3\o\s\z\c\1\o\s\c\q\s\m\j\o\0\g\y\w\z\5\0\4\z\g\c\a\t\t\7\0\t\x\g\f\l\u\q\s\x\s\f\n\z\h\r\m\2\i\g\3\v\3\k\2\8\0\0\k\7\c\5\j\d\9\s\8\f\j\m\2\m\l\o\s\n\h\w\f\h\x\f\a\3\5\l\n\h\c\1\6\d\z\k\z\r\m\7\x\y\n\6\b\s\6\s\r\g\f\k\x\o\k\6\g\j\9\0\9\y\p\x\d\9\2\m\x\l\h\o\r\7\s\t\o\d\s\m\g\k\t\x\x\a\7\2\y\o\5\r\k\x\i\y\m\5\5\d\q\l\0\l\0\a\9\n\o\y\r\n\l\2\m\g\6\7\t\r\9\s\q\y\8\x\h\a\y\z\3\s\o\h\e\q\r\c\r\z\8\4\l\5\r\l\y\h\t\3\1\t\p\3\1\s\q\3\f\h\i\o\r\r\c\k\g\q\l\b\v\r\l\0\d\4\r\r\c\e\5\m\m\u\2\9\4\q\c\5\7\z\n\w\p\f\7\9\s\5\0\v\k\j\x\l\t\s\3\b\n\e\7\4\8\8\l\p\c\b\q\0\l\d\m\v\7\g\v\n\o\j\r\2\p\f\u\3\a\y\u\m\e\u\e\e\h\t\p\p\i\g\d\u\8\n\z\8\c\9\q\s\c\l\g\u\v\0\l\y\w\n\a\6\p\i\f\i\8\k\p\l\u\e\g\l\g\n\b\r\g\c\k\a\4\x\v\t\s\q\t\i\2\5\1\e\w\d\5\z\w\s\f\w\h\u\m\k\w\v\2\j\5\4\q\i\n\b\7\s\6\b\p\l\l\3\r\5\y\p\4\5\o\9\6\8\c\z\i\2\f\r\5\c\d\2\o\u\7\b\g\h\t\v\g\n\t\v\1\q\r\9\0\4\z\q\u\f\9\7\7\2\f\w\c\a\o\m\g\n\8\u\3\1\3\9\a\l\a\i\3\s\1\q\j\a\z\2\g\9\s\p\0\w\h\h\8\b\2\n\4\u\d\6\b\c\m\t\r\a\k\m\z\e\g\d\0\j\p\k\2\0\o\9\4\g\o\e\4\v\v\r\5\b\e\x\k\0\t\h\8\w\v\y\p\7\z\o\w\9\7\e\k\5\b\r\7\9\g\j\w\p\f\k\y\i\r\y\1\m\p\d\e\t\y\v\k\s\u\f\7\d\d\u\t\r\o\c\z\w\g\3\g\0\m\4\w\l\y\t\o\5\o\a\r\c\b\s\9\9\e\x\j\p\n\5\e\j\h\0\l\u\m\2\p\o\g\x\h\7\v\4\w\m\2\m\z\w\9\i\0\l\c\c\t\s\f\w\j\1\x\4\3\q\y\z\q\2\x\h\t\g\a\w\r\f\2\0\d\i\g\d\w\g\o\x\h\f\0\q\k\u\8\i\w\j\7\e\3 ]] 00:06:18.375 12:14:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:18.634 12:14:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:18.634 12:14:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:18.634 12:14:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:18.634 12:14:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:18.634 { 00:06:18.634 "subsystems": [ 00:06:18.634 { 00:06:18.634 "subsystem": "bdev", 00:06:18.634 "config": [ 00:06:18.634 { 00:06:18.634 "params": { 00:06:18.634 "block_size": 512, 00:06:18.634 "num_blocks": 1048576, 00:06:18.634 "name": "malloc0" 00:06:18.634 }, 00:06:18.634 "method": "bdev_malloc_create" 00:06:18.634 }, 00:06:18.634 { 00:06:18.634 "params": { 00:06:18.634 "filename": "/dev/zram1", 00:06:18.634 "name": "uring0" 00:06:18.634 }, 00:06:18.634 "method": "bdev_uring_create" 00:06:18.634 }, 00:06:18.634 { 00:06:18.634 "method": "bdev_wait_for_examine" 00:06:18.634 } 00:06:18.634 ] 00:06:18.634 } 00:06:18.634 ] 00:06:18.634 } 00:06:18.634 [2024-12-06 12:14:05.218125] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:18.634 [2024-12-06 12:14:05.218239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60922 ] 00:06:18.892 [2024-12-06 12:14:05.363712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.892 [2024-12-06 12:14:05.391240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.892 [2024-12-06 12:14:05.418407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.269  [2024-12-06T12:14:07.865Z] Copying: 184/512 [MB] (184 MBps) [2024-12-06T12:14:08.433Z] Copying: 368/512 [MB] (183 MBps) [2024-12-06T12:14:08.692Z] Copying: 512/512 [MB] (average 183 MBps) 00:06:22.034 00:06:22.034 12:14:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:22.034 12:14:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:22.034 12:14:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:22.034 12:14:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:22.034 12:14:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:22.034 12:14:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:22.034 12:14:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:22.034 12:14:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:22.034 [2024-12-06 12:14:08.580767] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:22.034 [2024-12-06 12:14:08.580867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60967 ] 00:06:22.034 { 00:06:22.034 "subsystems": [ 00:06:22.034 { 00:06:22.034 "subsystem": "bdev", 00:06:22.034 "config": [ 00:06:22.034 { 00:06:22.034 "params": { 00:06:22.034 "block_size": 512, 00:06:22.034 "num_blocks": 1048576, 00:06:22.034 "name": "malloc0" 00:06:22.034 }, 00:06:22.034 "method": "bdev_malloc_create" 00:06:22.034 }, 00:06:22.034 { 00:06:22.034 "params": { 00:06:22.034 "filename": "/dev/zram1", 00:06:22.034 "name": "uring0" 00:06:22.034 }, 00:06:22.034 "method": "bdev_uring_create" 00:06:22.034 }, 00:06:22.034 { 00:06:22.034 "params": { 00:06:22.034 "name": "uring0" 00:06:22.034 }, 00:06:22.034 "method": "bdev_uring_delete" 00:06:22.034 }, 00:06:22.034 { 00:06:22.034 "method": "bdev_wait_for_examine" 00:06:22.034 } 00:06:22.034 ] 00:06:22.034 } 00:06:22.034 ] 00:06:22.034 } 00:06:22.293 [2024-12-06 12:14:08.724190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.293 [2024-12-06 12:14:08.758231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.293 [2024-12-06 12:14:08.787598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.293  [2024-12-06T12:14:09.210Z] Copying: 0/0 [B] (average 0 Bps) 00:06:22.552 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.553 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:22.553 12:14:09 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:22.553 [2024-12-06 12:14:09.174826] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:06:22.553 [2024-12-06 12:14:09.174920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60991 ] 00:06:22.553 { 00:06:22.553 "subsystems": [ 00:06:22.553 { 00:06:22.553 "subsystem": "bdev", 00:06:22.553 "config": [ 00:06:22.553 { 00:06:22.553 "params": { 00:06:22.553 "block_size": 512, 00:06:22.553 "num_blocks": 1048576, 00:06:22.553 "name": "malloc0" 00:06:22.553 }, 00:06:22.553 "method": "bdev_malloc_create" 00:06:22.553 }, 00:06:22.553 { 00:06:22.553 "params": { 00:06:22.553 "filename": "/dev/zram1", 00:06:22.553 "name": "uring0" 00:06:22.553 }, 00:06:22.553 "method": "bdev_uring_create" 00:06:22.553 }, 00:06:22.553 { 00:06:22.553 "params": { 00:06:22.553 "name": "uring0" 00:06:22.553 }, 00:06:22.553 "method": "bdev_uring_delete" 00:06:22.553 }, 00:06:22.553 { 00:06:22.553 "method": "bdev_wait_for_examine" 00:06:22.553 } 00:06:22.553 ] 00:06:22.553 } 00:06:22.553 ] 00:06:22.553 } 00:06:22.812 [2024-12-06 12:14:09.318867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.812 [2024-12-06 12:14:09.348307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.812 [2024-12-06 12:14:09.378195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.072 [2024-12-06 12:14:09.498271] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:23.072 [2024-12-06 12:14:09.498361] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:23.072 [2024-12-06 12:14:09.498373] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:06:23.072 [2024-12-06 12:14:09.498384] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.072 [2024-12-06 12:14:09.696997] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:23.331 12:14:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:23.589 00:06:23.589 real 0m12.202s 00:06:23.589 user 0m8.247s 00:06:23.589 sys 0m10.726s 00:06:23.589 12:14:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.589 12:14:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:23.589 ************************************ 00:06:23.589 END TEST dd_uring_copy 00:06:23.589 ************************************ 00:06:23.589 00:06:23.589 real 0m12.444s 00:06:23.589 user 0m8.379s 00:06:23.589 sys 0m10.839s 00:06:23.589 12:14:10 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.589 12:14:10 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:23.589 ************************************ 00:06:23.589 END TEST spdk_dd_uring 00:06:23.589 ************************************ 00:06:23.589 12:14:10 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:23.589 12:14:10 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.589 12:14:10 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.589 12:14:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:23.589 ************************************ 00:06:23.589 START TEST spdk_dd_sparse 00:06:23.589 ************************************ 00:06:23.589 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:23.589 * Looking for test storage... 00:06:23.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.848 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.849 --rc genhtml_branch_coverage=1 00:06:23.849 --rc genhtml_function_coverage=1 00:06:23.849 --rc genhtml_legend=1 00:06:23.849 --rc geninfo_all_blocks=1 00:06:23.849 --rc geninfo_unexecuted_blocks=1 00:06:23.849 00:06:23.849 ' 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.849 --rc genhtml_branch_coverage=1 00:06:23.849 --rc genhtml_function_coverage=1 00:06:23.849 --rc genhtml_legend=1 00:06:23.849 --rc geninfo_all_blocks=1 00:06:23.849 --rc geninfo_unexecuted_blocks=1 00:06:23.849 00:06:23.849 ' 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.849 --rc genhtml_branch_coverage=1 00:06:23.849 --rc genhtml_function_coverage=1 00:06:23.849 --rc genhtml_legend=1 00:06:23.849 --rc geninfo_all_blocks=1 00:06:23.849 --rc geninfo_unexecuted_blocks=1 00:06:23.849 00:06:23.849 ' 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.849 --rc genhtml_branch_coverage=1 00:06:23.849 --rc genhtml_function_coverage=1 00:06:23.849 --rc genhtml_legend=1 00:06:23.849 --rc geninfo_all_blocks=1 00:06:23.849 --rc geninfo_unexecuted_blocks=1 00:06:23.849 00:06:23.849 ' 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.849 12:14:10 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:23.849 1+0 records in 00:06:23.849 1+0 records out 00:06:23.849 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00591059 s, 710 MB/s 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:23.849 1+0 records in 00:06:23.849 1+0 records out 00:06:23.849 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00563644 s, 744 MB/s 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:23.849 1+0 records in 00:06:23.849 1+0 records out 00:06:23.849 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00398977 s, 1.1 GB/s 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:23.849 ************************************ 00:06:23.849 START TEST dd_sparse_file_to_file 00:06:23.849 ************************************ 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:23.849 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:23.849 [2024-12-06 12:14:10.451613] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:23.849 [2024-12-06 12:14:10.451735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61097 ] 00:06:23.849 { 00:06:23.849 "subsystems": [ 00:06:23.849 { 00:06:23.849 "subsystem": "bdev", 00:06:23.849 "config": [ 00:06:23.849 { 00:06:23.849 "params": { 00:06:23.849 "block_size": 4096, 00:06:23.849 "filename": "dd_sparse_aio_disk", 00:06:23.849 "name": "dd_aio" 00:06:23.849 }, 00:06:23.849 "method": "bdev_aio_create" 00:06:23.849 }, 00:06:23.849 { 00:06:23.849 "params": { 00:06:23.849 "lvs_name": "dd_lvstore", 00:06:23.849 "bdev_name": "dd_aio" 00:06:23.849 }, 00:06:23.849 "method": "bdev_lvol_create_lvstore" 00:06:23.849 }, 00:06:23.849 { 00:06:23.849 "method": "bdev_wait_for_examine" 00:06:23.849 } 00:06:23.849 ] 00:06:23.849 } 00:06:23.849 ] 00:06:23.849 } 00:06:24.108 [2024-12-06 12:14:10.589976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.108 [2024-12-06 12:14:10.617960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.108 [2024-12-06 12:14:10.645283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.108  [2024-12-06T12:14:11.025Z] Copying: 12/36 [MB] (average 1000 MBps) 00:06:24.367 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:24.367 00:06:24.367 real 0m0.494s 00:06:24.367 user 0m0.302s 00:06:24.367 sys 0m0.237s 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.367 ************************************ 00:06:24.367 END TEST dd_sparse_file_to_file 00:06:24.367 ************************************ 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:24.367 ************************************ 00:06:24.367 START TEST dd_sparse_file_to_bdev 
00:06:24.367 ************************************ 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:24.367 12:14:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:24.367 [2024-12-06 12:14:10.996683] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:06:24.367 [2024-12-06 12:14:10.996774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61140 ] 00:06:24.367 { 00:06:24.367 "subsystems": [ 00:06:24.367 { 00:06:24.367 "subsystem": "bdev", 00:06:24.367 "config": [ 00:06:24.367 { 00:06:24.367 "params": { 00:06:24.367 "block_size": 4096, 00:06:24.367 "filename": "dd_sparse_aio_disk", 00:06:24.367 "name": "dd_aio" 00:06:24.367 }, 00:06:24.367 "method": "bdev_aio_create" 00:06:24.367 }, 00:06:24.367 { 00:06:24.367 "params": { 00:06:24.368 "lvs_name": "dd_lvstore", 00:06:24.368 "lvol_name": "dd_lvol", 00:06:24.368 "size_in_mib": 36, 00:06:24.368 "thin_provision": true 00:06:24.368 }, 00:06:24.368 "method": "bdev_lvol_create" 00:06:24.368 }, 00:06:24.368 { 00:06:24.368 "method": "bdev_wait_for_examine" 00:06:24.368 } 00:06:24.368 ] 00:06:24.368 } 00:06:24.368 ] 00:06:24.368 } 00:06:24.626 [2024-12-06 12:14:11.140226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.626 [2024-12-06 12:14:11.171070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.626 [2024-12-06 12:14:11.203187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.626  [2024-12-06T12:14:11.544Z] Copying: 12/36 [MB] (average 480 MBps) 00:06:24.886 00:06:24.886 00:06:24.886 real 0m0.471s 00:06:24.886 user 0m0.299s 00:06:24.886 sys 0m0.244s 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.886 ************************************ 00:06:24.886 END TEST dd_sparse_file_to_bdev 00:06:24.886 ************************************ 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:24.886 ************************************ 00:06:24.886 START TEST dd_sparse_bdev_to_file 00:06:24.886 ************************************ 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:24.886 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:24.886 { 00:06:24.886 "subsystems": [ 00:06:24.886 { 00:06:24.886 "subsystem": "bdev", 00:06:24.886 "config": [ 00:06:24.886 { 00:06:24.886 "params": { 00:06:24.886 "block_size": 4096, 00:06:24.886 "filename": "dd_sparse_aio_disk", 00:06:24.886 "name": "dd_aio" 00:06:24.886 }, 00:06:24.886 "method": "bdev_aio_create" 00:06:24.886 }, 00:06:24.886 { 00:06:24.886 "method": "bdev_wait_for_examine" 00:06:24.886 } 00:06:24.886 ] 00:06:24.886 } 00:06:24.886 ] 00:06:24.886 } 00:06:24.886 [2024-12-06 12:14:11.521801] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:24.886 [2024-12-06 12:14:11.522366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61172 ] 00:06:25.145 [2024-12-06 12:14:11.662166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.145 [2024-12-06 12:14:11.692558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.145 [2024-12-06 12:14:11.724433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.145  [2024-12-06T12:14:12.063Z] Copying: 12/36 [MB] (average 923 MBps) 00:06:25.405 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:25.405 00:06:25.405 real 0m0.478s 00:06:25.405 user 0m0.284s 00:06:25.405 sys 0m0.243s 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.405 ************************************ 00:06:25.405 END TEST dd_sparse_bdev_to_file 00:06:25.405 ************************************ 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:25.405 12:14:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:25.405 00:06:25.405 real 0m1.846s 00:06:25.405 user 0m1.068s 00:06:25.405 sys 0m0.937s 00:06:25.405 12:14:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.405 ************************************ 00:06:25.405 END TEST spdk_dd_sparse 00:06:25.405 ************************************ 00:06:25.405 12:14:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:25.405 12:14:12 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:25.405 12:14:12 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.405 12:14:12 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.405 12:14:12 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:06:25.405 ************************************ 00:06:25.405 START TEST spdk_dd_negative 00:06:25.405 ************************************ 00:06:25.405 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:25.665 * Looking for test storage... 00:06:25.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.665 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.666 --rc genhtml_branch_coverage=1 00:06:25.666 --rc genhtml_function_coverage=1 00:06:25.666 --rc genhtml_legend=1 00:06:25.666 --rc geninfo_all_blocks=1 00:06:25.666 --rc geninfo_unexecuted_blocks=1 00:06:25.666 00:06:25.666 ' 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.666 --rc genhtml_branch_coverage=1 00:06:25.666 --rc genhtml_function_coverage=1 00:06:25.666 --rc genhtml_legend=1 00:06:25.666 --rc geninfo_all_blocks=1 00:06:25.666 --rc geninfo_unexecuted_blocks=1 00:06:25.666 00:06:25.666 ' 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.666 --rc genhtml_branch_coverage=1 00:06:25.666 --rc genhtml_function_coverage=1 00:06:25.666 --rc genhtml_legend=1 00:06:25.666 --rc geninfo_all_blocks=1 00:06:25.666 --rc geninfo_unexecuted_blocks=1 00:06:25.666 00:06:25.666 ' 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.666 --rc genhtml_branch_coverage=1 00:06:25.666 --rc genhtml_function_coverage=1 00:06:25.666 --rc genhtml_legend=1 00:06:25.666 --rc geninfo_all_blocks=1 00:06:25.666 --rc geninfo_unexecuted_blocks=1 00:06:25.666 00:06:25.666 ' 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:25.666 ************************************ 00:06:25.666 START TEST 
dd_invalid_arguments 00:06:25.666 ************************************ 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:25.666 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:25.666 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:25.666 00:06:25.666 CPU options: 00:06:25.666 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:25.666 (like [0,1,10]) 00:06:25.666 --lcores lcore to CPU mapping list. The list is in the format: 00:06:25.666 [<,lcores[@CPUs]>...] 00:06:25.666 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:25.666 Within the group, '-' is used for range separator, 00:06:25.666 ',' is used for single number separator. 00:06:25.666 '( )' can be omitted for single element group, 00:06:25.666 '@' can be omitted if cpus and lcores have the same value 00:06:25.666 --disable-cpumask-locks Disable CPU core lock files. 00:06:25.666 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:25.666 pollers in the app support interrupt mode) 00:06:25.666 -p, --main-core main (primary) core for DPDK 00:06:25.666 00:06:25.666 Configuration options: 00:06:25.666 -c, --config, --json JSON config file 00:06:25.666 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:25.666 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:25.666 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:25.666 --rpcs-allowed comma-separated list of permitted RPCS 00:06:25.666 --json-ignore-init-errors don't exit on invalid config entry 00:06:25.666 00:06:25.666 Memory options: 00:06:25.666 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:25.666 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:25.666 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:25.666 -R, --huge-unlink unlink huge files after initialization 00:06:25.666 -n, --mem-channels number of memory channels used for DPDK 00:06:25.666 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:25.666 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:25.666 --no-huge run without using hugepages 00:06:25.666 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:25.666 -i, --shm-id shared memory ID (optional) 00:06:25.666 -g, --single-file-segments force creating just one hugetlbfs file 00:06:25.666 00:06:25.666 PCI options: 00:06:25.666 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:25.666 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:25.666 -u, --no-pci disable PCI access 00:06:25.666 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:25.666 00:06:25.666 Log options: 00:06:25.666 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:25.666 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:25.666 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:25.666 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:25.666 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:25.666 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:25.666 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:25.666 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:25.666 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:25.666 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:25.666 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:25.666 --silence-noticelog disable notice level logging to stderr 00:06:25.666 00:06:25.666 Trace options: 00:06:25.666 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:25.666 setting 0 to disable trace (default 32768) 00:06:25.667 Tracepoints vary in size and can use more than one trace entry. 00:06:25.667 -e, --tpoint-group [:] 00:06:25.667 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:25.667 [2024-12-06 12:14:12.294566] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:06:25.667 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:25.667 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:25.667 bdev_raid, scheduler, all). 00:06:25.667 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:25.667 a tracepoint group. First tpoint inside a group can be enabled by 00:06:25.667 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:25.667 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:25.667 in /include/spdk_internal/trace_defs.h 00:06:25.667 00:06:25.667 Other options: 00:06:25.667 -h, --help show this usage 00:06:25.667 -v, --version print SPDK version 00:06:25.667 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:25.667 --env-context Opaque context for use of the env implementation 00:06:25.667 00:06:25.667 Application specific: 00:06:25.667 [--------- DD Options ---------] 00:06:25.667 --if Input file. Must specify either --if or --ib. 00:06:25.667 --ib Input bdev. Must specifier either --if or --ib 00:06:25.667 --of Output file. Must specify either --of or --ob. 00:06:25.667 --ob Output bdev. Must specify either --of or --ob. 00:06:25.667 --iflag Input file flags. 00:06:25.667 --oflag Output file flags. 00:06:25.667 --bs I/O unit size (default: 4096) 00:06:25.667 --qd Queue depth (default: 2) 00:06:25.667 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:25.667 --skip Skip this many I/O units at start of input. (default: 0) 00:06:25.667 --seek Skip this many I/O units at start of output. (default: 0) 00:06:25.667 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:25.667 --sparse Enable hole skipping in input target 00:06:25.667 Available iflag and oflag values: 00:06:25.667 append - append mode 00:06:25.667 direct - use direct I/O for data 00:06:25.667 directory - fail unless a directory 00:06:25.667 dsync - use synchronized I/O for data 00:06:25.667 noatime - do not update access time 00:06:25.667 noctty - do not assign controlling terminal from file 00:06:25.667 nofollow - do not follow symlinks 00:06:25.667 nonblock - use non-blocking I/O 00:06:25.667 sync - use synchronized I/O for data and metadata 00:06:25.667 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:06:25.667 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.667 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.667 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.667 00:06:25.667 real 0m0.062s 00:06:25.667 user 0m0.040s 00:06:25.667 sys 0m0.020s 00:06:25.667 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.667 12:14:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:25.667 ************************************ 00:06:25.667 END TEST dd_invalid_arguments 00:06:25.667 ************************************ 00:06:25.926 12:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:25.926 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.926 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.926 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:25.926 ************************************ 00:06:25.926 START TEST dd_double_input 00:06:25.926 ************************************ 00:06:25.926 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:06:25.926 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:25.926 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:06:25.926 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:25.926 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:25.927 [2024-12-06 12:14:12.427949] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.927 00:06:25.927 real 0m0.096s 00:06:25.927 user 0m0.057s 00:06:25.927 sys 0m0.036s 00:06:25.927 ************************************ 00:06:25.927 END TEST dd_double_input 00:06:25.927 ************************************ 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:25.927 ************************************ 00:06:25.927 START TEST dd_double_output 00:06:25.927 ************************************ 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:25.927 [2024-12-06 12:14:12.564766] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:06:25.927 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.186 ************************************ 00:06:26.187 END TEST dd_double_output 00:06:26.187 ************************************ 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.187 00:06:26.187 real 0m0.075s 00:06:26.187 user 0m0.048s 00:06:26.187 sys 0m0.026s 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:26.187 ************************************ 00:06:26.187 START TEST dd_no_input 00:06:26.187 ************************************ 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:26.187 [2024-12-06 12:14:12.691625] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.187 00:06:26.187 real 0m0.075s 00:06:26.187 user 0m0.046s 00:06:26.187 sys 0m0.029s 00:06:26.187 ************************************ 00:06:26.187 END TEST dd_no_input 00:06:26.187 ************************************ 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:26.187 ************************************ 00:06:26.187 START TEST dd_no_output 00:06:26.187 ************************************ 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.187 [2024-12-06 12:14:12.819953] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:06:26.187 12:14:12 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.187 00:06:26.187 real 0m0.078s 00:06:26.187 user 0m0.046s 00:06:26.187 sys 0m0.030s 00:06:26.187 ************************************ 00:06:26.187 END TEST dd_no_output 00:06:26.187 ************************************ 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.187 12:14:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:26.447 ************************************ 00:06:26.447 START TEST dd_wrong_blocksize 00:06:26.447 ************************************ 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:26.447 [2024-12-06 12:14:12.950153] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.447 ************************************ 00:06:26.447 END TEST dd_wrong_blocksize 00:06:26.447 ************************************ 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.447 00:06:26.447 real 0m0.076s 00:06:26.447 user 0m0.049s 00:06:26.447 sys 0m0.026s 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.447 12:14:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:26.447 ************************************ 00:06:26.447 START TEST dd_smaller_blocksize 00:06:26.447 ************************************ 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.447 
12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.447 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:26.447 [2024-12-06 12:14:13.084905] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:06:26.447 [2024-12-06 12:14:13.085158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61399 ] 00:06:26.706 [2024-12-06 12:14:13.236432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.706 [2024-12-06 12:14:13.278090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.706 [2024-12-06 12:14:13.315093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.965 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:27.224 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:27.224 [2024-12-06 12:14:13.787977] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:27.224 [2024-12-06 12:14:13.788156] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.224 [2024-12-06 12:14:13.851920] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:06:27.483 ************************************ 00:06:27.483 END TEST dd_smaller_blocksize 00:06:27.483 ************************************ 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.483 00:06:27.483 real 0m0.884s 00:06:27.483 user 0m0.325s 00:06:27.483 sys 0m0.450s 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:27.483 ************************************ 00:06:27.483 START TEST dd_invalid_count 00:06:27.483 ************************************ 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
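The dd_wrong_blocksize and dd_smaller_blocksize cases above both probe spdk_dd's --bs handling: a zero blocksize is rejected during argument parsing ("Invalid --bs value"), while an absurdly large one parses but fails once the copy buffer cannot be carved out of the hugepage memsegs ("Cannot allocate memory - try smaller block size value"). A minimal sketch of that boundary, reusing the binary and dump-file paths from the trace; the final invocation with a modest blocksize succeeding is an assumption, not something this log records:
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path as traced above
IF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
OF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
"$SPDK_DD" --if="$IF" --of="$OF" --bs=0                # rejected up front: Invalid --bs value
"$SPDK_DD" --if="$IF" --of="$OF" --bs=99999999999999   # parses, then buffer allocation fails
"$SPDK_DD" --if="$IF" --of="$OF" --bs=4096             # assumed to succeed with a sane blocksize (not shown in this log)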
00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:27.483 12:14:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:27.483 [2024-12-06 12:14:14.020632] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.483 00:06:27.483 real 0m0.078s 00:06:27.483 user 0m0.054s 00:06:27.483 sys 0m0.023s 00:06:27.483 ************************************ 00:06:27.483 END TEST dd_invalid_count 00:06:27.483 ************************************ 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:27.483 ************************************ 
00:06:27.483 START TEST dd_invalid_oflag 00:06:27.483 ************************************ 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:27.483 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:27.742 [2024-12-06 12:14:14.140528] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.742 00:06:27.742 real 0m0.062s 00:06:27.742 user 0m0.039s 00:06:27.742 sys 0m0.022s 00:06:27.742 ************************************ 00:06:27.742 END TEST dd_invalid_oflag 00:06:27.742 ************************************ 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:27.742 ************************************ 00:06:27.742 START TEST dd_invalid_iflag 00:06:27.742 
************************************ 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.742 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:27.743 [2024-12-06 12:14:14.267995] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:06:27.743 ************************************ 00:06:27.743 END TEST dd_invalid_iflag 00:06:27.743 ************************************ 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.743 00:06:27.743 real 0m0.078s 00:06:27.743 user 0m0.048s 00:06:27.743 sys 0m0.029s 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:27.743 ************************************ 00:06:27.743 START TEST dd_unknown_flag 00:06:27.743 ************************************ 00:06:27.743 
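The dd_invalid_oflag and dd_invalid_iflag cases just above enforce the pairing rule for file flags: --oflag is only meaningful together with --of, and --iflag only together with --if, so passing either alongside bdev targets (--ib/--ob) is refused before any copy starts. A hedged sketch of the rule, using only options that appear in this trace; the last line, where the flag is paired with real files, is assumed to get past this particular check:
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --ib= --ob= --oflag=0    # rejected: --oflags may be used only with --of
"$SPDK_DD" --ib= --ob= --iflag=0    # rejected: --iflags may be used only with --if
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=0   # pairing satisfied (assumption)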
12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:27.743 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:27.743 [2024-12-06 12:14:14.386733] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:27.743 [2024-12-06 12:14:14.386812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61491 ] 00:06:28.001 [2024-12-06 12:14:14.526309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.001 [2024-12-06 12:14:14.559710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.001 [2024-12-06 12:14:14.589604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.001 [2024-12-06 12:14:14.608070] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:06:28.001 [2024-12-06 12:14:14.608127] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.001 [2024-12-06 12:14:14.608501] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:06:28.001 [2024-12-06 12:14:14.608528] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.001 [2024-12-06 12:14:14.608777] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:28.001 [2024-12-06 12:14:14.608799] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.001 [2024-12-06 12:14:14.608855] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:28.001 [2024-12-06 12:14:14.608880] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:28.260 [2024-12-06 12:14:14.668904] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.260 00:06:28.260 real 0m0.382s 00:06:28.260 user 0m0.186s 00:06:28.260 sys 0m0.098s 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:28.260 ************************************ 00:06:28.260 END TEST dd_unknown_flag 00:06:28.260 ************************************ 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:28.260 ************************************ 00:06:28.260 START TEST dd_invalid_json 00:06:28.260 ************************************ 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:28.260 12:14:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:28.260 [2024-12-06 12:14:14.831645] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:28.260 [2024-12-06 12:14:14.831888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61519 ] 00:06:28.519 [2024-12-06 12:14:14.976446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.519 [2024-12-06 12:14:15.009750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.519 [2024-12-06 12:14:15.009829] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:28.519 [2024-12-06 12:14:15.009849] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:28.519 [2024-12-06 12:14:15.009858] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.519 [2024-12-06 12:14:15.009893] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.519 00:06:28.519 real 0m0.295s 00:06:28.519 user 0m0.136s 00:06:28.519 sys 0m0.058s 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.519 ************************************ 00:06:28.519 END TEST dd_invalid_json 00:06:28.519 ************************************ 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:28.519 ************************************ 00:06:28.519 START TEST dd_invalid_seek 00:06:28.519 ************************************ 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:28.519 
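The dd_invalid_json case above hands spdk_dd its configuration through --json /dev/fd/62 and confirms that an empty document is rejected ("JSON data cannot be empty") before the app shuts down. The seek/skip/count cases that follow pass a small bdev config over the same descriptor; a sketch of that shape, assuming bash process substitution is what sits behind /dev/fd/62 here and that a plain malloc-to-malloc copy with this config succeeds:
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cfg='{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"block_size":512,"num_blocks":512,"name":"malloc0"},"method":"bdev_malloc_create"},
  {"params":{"block_size":512,"num_blocks":512,"name":"malloc1"},"method":"bdev_malloc_create"},
  {"method":"bdev_wait_for_examine"}]}]}'
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=512 --json <(printf '%s' "$cfg")   # config accepted, full copy assumed to succeed
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=512 --json <(printf '')            # fails: JSON data cannot be empty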
12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:28.519 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:28.777 [2024-12-06 12:14:15.185182] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:28.777 [2024-12-06 12:14:15.185279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61549 ] 00:06:28.777 { 00:06:28.777 "subsystems": [ 00:06:28.777 { 00:06:28.777 "subsystem": "bdev", 00:06:28.777 "config": [ 00:06:28.777 { 00:06:28.777 "params": { 00:06:28.777 "block_size": 512, 00:06:28.777 "num_blocks": 512, 00:06:28.777 "name": "malloc0" 00:06:28.777 }, 00:06:28.777 "method": "bdev_malloc_create" 00:06:28.777 }, 00:06:28.777 { 00:06:28.777 "params": { 00:06:28.777 "block_size": 512, 00:06:28.777 "num_blocks": 512, 00:06:28.777 "name": "malloc1" 00:06:28.777 }, 00:06:28.777 "method": "bdev_malloc_create" 00:06:28.777 }, 00:06:28.777 { 00:06:28.777 "method": "bdev_wait_for_examine" 00:06:28.777 } 00:06:28.777 ] 00:06:28.777 } 00:06:28.777 ] 00:06:28.777 } 00:06:28.777 [2024-12-06 12:14:15.328164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.777 [2024-12-06 12:14:15.357917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.777 [2024-12-06 12:14:15.388467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.777 [2024-12-06 12:14:15.433021] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:28.777 [2024-12-06 12:14:15.433093] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.035 [2024-12-06 12:14:15.493250] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.035 00:06:29.035 real 0m0.423s 00:06:29.035 user 0m0.279s 00:06:29.035 sys 0m0.107s 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.035 ************************************ 00:06:29.035 END TEST dd_invalid_seek 00:06:29.035 ************************************ 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:29.035 ************************************ 00:06:29.035 START TEST dd_invalid_skip 00:06:29.035 ************************************ 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.035 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.036 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.036 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:29.036 12:14:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:29.036 [2024-12-06 12:14:15.654991] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:29.036 [2024-12-06 12:14:15.655090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61581 ] 00:06:29.036 { 00:06:29.036 "subsystems": [ 00:06:29.036 { 00:06:29.036 "subsystem": "bdev", 00:06:29.036 "config": [ 00:06:29.036 { 00:06:29.036 "params": { 00:06:29.036 "block_size": 512, 00:06:29.036 "num_blocks": 512, 00:06:29.036 "name": "malloc0" 00:06:29.036 }, 00:06:29.036 "method": "bdev_malloc_create" 00:06:29.036 }, 00:06:29.036 { 00:06:29.036 "params": { 00:06:29.036 "block_size": 512, 00:06:29.036 "num_blocks": 512, 00:06:29.036 "name": "malloc1" 00:06:29.036 }, 00:06:29.036 "method": "bdev_malloc_create" 00:06:29.036 }, 00:06:29.036 { 00:06:29.036 "method": "bdev_wait_for_examine" 00:06:29.036 } 00:06:29.036 ] 00:06:29.036 } 00:06:29.036 ] 00:06:29.036 } 00:06:29.293 [2024-12-06 12:14:15.799792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.293 [2024-12-06 12:14:15.831243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.293 [2024-12-06 12:14:15.863806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.293 [2024-12-06 12:14:15.909470] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:29.293 [2024-12-06 12:14:15.909553] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.551 [2024-12-06 12:14:15.972109] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.551 00:06:29.551 real 0m0.428s 00:06:29.551 user 0m0.269s 00:06:29.551 sys 0m0.121s 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.551 ************************************ 00:06:29.551 END TEST dd_invalid_skip 00:06:29.551 ************************************ 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:29.551 ************************************ 00:06:29.551 START TEST dd_invalid_input_count 00:06:29.551 ************************************ 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:06:29.551 12:14:16 
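dd_invalid_seek and dd_invalid_skip above, together with the two count cases that follow, all hit the same bounds check: the malloc bdevs declared in the JSON config are 512 blocks of 512 bytes, so asking for block 513 on either side is one past the end and spdk_dd aborts the copy instead of wrapping or truncating. A self-contained sketch of that boundary, with the same two-malloc config inlined; the error strings in the comments are the ones recorded in this log:
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cfg='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"block_size":512,"num_blocks":512,"name":"malloc0"},"method":"bdev_malloc_create"},{"params":{"block_size":512,"num_blocks":512,"name":"malloc1"},"method":"bdev_malloc_create"},{"method":"bdev_wait_for_examine"}]}]}'
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json <(printf '%s' "$cfg")   # --seek value too big (513) - only 512 blocks available in output
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --skip=513 --bs=512 --json <(printf '%s' "$cfg")   # --skip value too big (513) - only 512 blocks available in input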
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.551 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:29.552 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:29.552 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.552 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.552 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.552 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.552 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.552 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.552 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:29.552 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:29.552 [2024-12-06 12:14:16.131491] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:29.552 [2024-12-06 12:14:16.131578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61616 ] 00:06:29.552 { 00:06:29.552 "subsystems": [ 00:06:29.552 { 00:06:29.552 "subsystem": "bdev", 00:06:29.552 "config": [ 00:06:29.552 { 00:06:29.552 "params": { 00:06:29.552 "block_size": 512, 00:06:29.552 "num_blocks": 512, 00:06:29.552 "name": "malloc0" 00:06:29.552 }, 00:06:29.552 "method": "bdev_malloc_create" 00:06:29.552 }, 00:06:29.552 { 00:06:29.552 "params": { 00:06:29.552 "block_size": 512, 00:06:29.552 "num_blocks": 512, 00:06:29.552 "name": "malloc1" 00:06:29.552 }, 00:06:29.552 "method": "bdev_malloc_create" 00:06:29.552 }, 00:06:29.552 { 00:06:29.552 "method": "bdev_wait_for_examine" 00:06:29.552 } 00:06:29.552 ] 00:06:29.552 } 00:06:29.552 ] 00:06:29.552 } 00:06:29.810 [2024-12-06 12:14:16.275635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.810 [2024-12-06 12:14:16.304320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.810 [2024-12-06 12:14:16.332591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.810 [2024-12-06 12:14:16.376208] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:29.810 [2024-12-06 12:14:16.376289] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.810 [2024-12-06 12:14:16.435880] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.068 00:06:30.068 real 0m0.414s 00:06:30.068 user 0m0.272s 00:06:30.068 sys 0m0.100s 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.068 ************************************ 00:06:30.068 END TEST dd_invalid_input_count 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:30.068 ************************************ 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:30.068 ************************************ 00:06:30.068 START TEST dd_invalid_output_count 00:06:30.068 ************************************ 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.068 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.069 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:30.069 { 00:06:30.069 "subsystems": [ 00:06:30.069 { 00:06:30.069 "subsystem": "bdev", 00:06:30.069 "config": [ 00:06:30.069 { 00:06:30.069 "params": { 00:06:30.069 "block_size": 512, 00:06:30.069 "num_blocks": 512, 00:06:30.069 "name": "malloc0" 00:06:30.069 }, 00:06:30.069 "method": "bdev_malloc_create" 00:06:30.069 }, 00:06:30.069 { 00:06:30.069 "method": "bdev_wait_for_examine" 00:06:30.069 } 00:06:30.069 ] 00:06:30.069 } 00:06:30.069 ] 00:06:30.069 } 00:06:30.069 [2024-12-06 12:14:16.605358] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 
initialization... 00:06:30.069 [2024-12-06 12:14:16.605457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61644 ] 00:06:30.326 [2024-12-06 12:14:16.748589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.326 [2024-12-06 12:14:16.778577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.326 [2024-12-06 12:14:16.809172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.326 [2024-12-06 12:14:16.845423] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:30.326 [2024-12-06 12:14:16.845512] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.326 [2024-12-06 12:14:16.905341] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:30.326 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:06:30.326 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.326 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:06:30.326 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:30.326 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:06:30.326 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.326 00:06:30.326 real 0m0.418s 00:06:30.326 user 0m0.267s 00:06:30.326 sys 0m0.108s 00:06:30.326 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.326 12:14:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:30.326 ************************************ 00:06:30.326 END TEST dd_invalid_output_count 00:06:30.326 ************************************ 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:30.584 ************************************ 00:06:30.584 START TEST dd_bs_not_multiple 00:06:30.584 ************************************ 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:30.584 12:14:17 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.584 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:30.584 [2024-12-06 12:14:17.066197] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:30.584 [2024-12-06 12:14:17.066286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61682 ] 00:06:30.584 { 00:06:30.584 "subsystems": [ 00:06:30.584 { 00:06:30.584 "subsystem": "bdev", 00:06:30.584 "config": [ 00:06:30.584 { 00:06:30.584 "params": { 00:06:30.584 "block_size": 512, 00:06:30.584 "num_blocks": 512, 00:06:30.584 "name": "malloc0" 00:06:30.584 }, 00:06:30.584 "method": "bdev_malloc_create" 00:06:30.584 }, 00:06:30.584 { 00:06:30.584 "params": { 00:06:30.584 "block_size": 512, 00:06:30.584 "num_blocks": 512, 00:06:30.584 "name": "malloc1" 00:06:30.584 }, 00:06:30.584 "method": "bdev_malloc_create" 00:06:30.584 }, 00:06:30.584 { 00:06:30.585 "method": "bdev_wait_for_examine" 00:06:30.585 } 00:06:30.585 ] 00:06:30.585 } 00:06:30.585 ] 00:06:30.585 } 00:06:30.585 [2024-12-06 12:14:17.210706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.585 [2024-12-06 12:14:17.238321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.843 [2024-12-06 12:14:17.266495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.843 [2024-12-06 12:14:17.309664] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:30.843 [2024-12-06 12:14:17.309735] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.843 [2024-12-06 12:14:17.368838] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:30.843 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:06:30.843 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.843 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:06:30.843 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:06:30.843 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:06:30.843 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.843 00:06:30.843 real 0m0.406s 00:06:30.843 user 0m0.268s 00:06:30.843 sys 0m0.098s 00:06:30.843 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.843 12:14:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:30.843 ************************************ 00:06:30.843 END TEST dd_bs_not_multiple 00:06:30.843 ************************************ 00:06:30.843 00:06:30.843 real 0m5.400s 00:06:30.843 user 0m2.823s 00:06:30.843 sys 0m1.989s 00:06:30.843 12:14:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.843 12:14:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:30.843 ************************************ 00:06:30.843 END TEST spdk_dd_negative 00:06:30.843 ************************************ 00:06:31.103 ************************************ 00:06:31.103 END TEST spdk_dd 00:06:31.103 ************************************ 00:06:31.103 00:06:31.103 real 1m2.058s 00:06:31.103 user 0m38.997s 00:06:31.103 sys 0m26.286s 00:06:31.103 12:14:17 spdk_dd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:31.103 12:14:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:31.103 12:14:17 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:31.103 12:14:17 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:31.103 12:14:17 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:31.103 12:14:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.103 12:14:17 -- common/autotest_common.sh@10 -- # set +x 00:06:31.103 12:14:17 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:31.103 12:14:17 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:31.103 12:14:17 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:31.103 12:14:17 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:31.103 12:14:17 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:31.103 12:14:17 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:31.103 12:14:17 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:31.103 12:14:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.103 12:14:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.103 12:14:17 -- common/autotest_common.sh@10 -- # set +x 00:06:31.103 ************************************ 00:06:31.103 START TEST nvmf_tcp 00:06:31.103 ************************************ 00:06:31.103 12:14:17 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:31.103 * Looking for test storage... 00:06:31.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:31.103 12:14:17 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.103 12:14:17 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.103 12:14:17 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.363 12:14:17 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.363 12:14:17 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:31.363 12:14:17 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.363 12:14:17 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.363 --rc genhtml_branch_coverage=1 00:06:31.363 --rc genhtml_function_coverage=1 00:06:31.363 --rc genhtml_legend=1 00:06:31.363 --rc geninfo_all_blocks=1 00:06:31.363 --rc geninfo_unexecuted_blocks=1 00:06:31.363 00:06:31.363 ' 00:06:31.363 12:14:17 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.363 --rc genhtml_branch_coverage=1 00:06:31.363 --rc genhtml_function_coverage=1 00:06:31.363 --rc genhtml_legend=1 00:06:31.363 --rc geninfo_all_blocks=1 00:06:31.363 --rc geninfo_unexecuted_blocks=1 00:06:31.363 00:06:31.363 ' 00:06:31.363 12:14:17 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.363 --rc genhtml_branch_coverage=1 00:06:31.363 --rc genhtml_function_coverage=1 00:06:31.363 --rc genhtml_legend=1 00:06:31.363 --rc geninfo_all_blocks=1 00:06:31.363 --rc geninfo_unexecuted_blocks=1 00:06:31.363 00:06:31.363 ' 00:06:31.363 12:14:17 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.363 --rc genhtml_branch_coverage=1 00:06:31.363 --rc genhtml_function_coverage=1 00:06:31.363 --rc genhtml_legend=1 00:06:31.363 --rc geninfo_all_blocks=1 00:06:31.363 --rc geninfo_unexecuted_blocks=1 00:06:31.363 00:06:31.363 ' 00:06:31.363 12:14:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:31.363 12:14:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:31.363 12:14:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:31.363 12:14:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.363 12:14:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.363 12:14:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.363 ************************************ 00:06:31.363 START TEST nvmf_target_core 00:06:31.363 ************************************ 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:31.363 * Looking for test storage... 00:06:31.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:31.363 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.364 --rc genhtml_branch_coverage=1 00:06:31.364 --rc genhtml_function_coverage=1 00:06:31.364 --rc genhtml_legend=1 00:06:31.364 --rc geninfo_all_blocks=1 00:06:31.364 --rc geninfo_unexecuted_blocks=1 00:06:31.364 00:06:31.364 ' 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.364 --rc genhtml_branch_coverage=1 00:06:31.364 --rc genhtml_function_coverage=1 00:06:31.364 --rc genhtml_legend=1 00:06:31.364 --rc geninfo_all_blocks=1 00:06:31.364 --rc geninfo_unexecuted_blocks=1 00:06:31.364 00:06:31.364 ' 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.364 --rc genhtml_branch_coverage=1 00:06:31.364 --rc genhtml_function_coverage=1 00:06:31.364 --rc genhtml_legend=1 00:06:31.364 --rc geninfo_all_blocks=1 00:06:31.364 --rc geninfo_unexecuted_blocks=1 00:06:31.364 00:06:31.364 ' 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.364 --rc genhtml_branch_coverage=1 00:06:31.364 --rc genhtml_function_coverage=1 00:06:31.364 --rc genhtml_legend=1 00:06:31.364 --rc geninfo_all_blocks=1 00:06:31.364 --rc geninfo_unexecuted_blocks=1 00:06:31.364 00:06:31.364 ' 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.364 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:31.364 ************************************ 00:06:31.364 START TEST nvmf_host_management 00:06:31.364 ************************************ 00:06:31.364 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:31.624 * Looking for test storage... 
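The "[: : integer expression expected" message above is harmless but has a simple cause: line 33 of test/nvmf/common.sh applies an arithmetic test ('[' '' -eq 1 ']') to a variable that is empty in this configuration, and test(1) cannot parse an empty string as a number. A minimal reproduction and the usual guards (the variable name here is illustrative only, not the one used in common.sh):

flag=''                                  # empty in this run
[ "$flag" -eq 1 ] && echo enabled        # -> "[: : integer expression expected"
[ "${flag:-0}" -eq 1 ] && echo enabled   # guard 1: default the empty value to 0
[[ "$flag" == 1 ]] && echo enabled       # guard 2: compare as a string instead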
00:06:31.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:31.624 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.624 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.624 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.624 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.624 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.624 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.625 --rc genhtml_branch_coverage=1 00:06:31.625 --rc genhtml_function_coverage=1 00:06:31.625 --rc genhtml_legend=1 00:06:31.625 --rc geninfo_all_blocks=1 00:06:31.625 --rc geninfo_unexecuted_blocks=1 00:06:31.625 00:06:31.625 ' 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.625 --rc genhtml_branch_coverage=1 00:06:31.625 --rc genhtml_function_coverage=1 00:06:31.625 --rc genhtml_legend=1 00:06:31.625 --rc geninfo_all_blocks=1 00:06:31.625 --rc geninfo_unexecuted_blocks=1 00:06:31.625 00:06:31.625 ' 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.625 --rc genhtml_branch_coverage=1 00:06:31.625 --rc genhtml_function_coverage=1 00:06:31.625 --rc genhtml_legend=1 00:06:31.625 --rc geninfo_all_blocks=1 00:06:31.625 --rc geninfo_unexecuted_blocks=1 00:06:31.625 00:06:31.625 ' 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.625 --rc genhtml_branch_coverage=1 00:06:31.625 --rc genhtml_function_coverage=1 00:06:31.625 --rc genhtml_legend=1 00:06:31.625 --rc geninfo_all_blocks=1 00:06:31.625 --rc geninfo_unexecuted_blocks=1 00:06:31.625 00:06:31.625 ' 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
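The lt 1.15 2 / cmp_versions trace repeated before each nested test is a pure-bash version comparison: lcov's version string is split on '.', '-' and ':' and compared to 2 field by field, so the coverage options can branch on whether lcov is older than 2.x. A simplified sketch of the comparison being traced (the in-tree cmp_versions also handles '>', '==' and non-numeric fields):

version_lt() {                          # return 0 if $1 < $2
  local -a a b
  IFS=.-: read -ra a <<< "$1"
  IFS=.-: read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}     # missing fields count as 0
    (( x > y )) && return 1
    (( x < y )) && return 0
  done
  return 1                              # equal versions are not less-than
}
version_lt 1.15 2 && echo 'lcov is older than 2.x'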
00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.625 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:31.625 12:14:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:31.625 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:31.626 Cannot find device "nvmf_init_br" 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:31.626 Cannot find device "nvmf_init_br2" 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:31.626 Cannot find device "nvmf_tgt_br" 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:31.626 Cannot find device "nvmf_tgt_br2" 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:31.626 Cannot find device "nvmf_init_br" 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:31.626 Cannot find device "nvmf_init_br2" 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:31.626 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:31.886 Cannot find device "nvmf_tgt_br" 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:31.886 Cannot find device "nvmf_tgt_br2" 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:31.886 Cannot find device "nvmf_br" 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:31.886 Cannot find device "nvmf_init_if" 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:31.886 Cannot find device "nvmf_init_if2" 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:31.886 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:31.886 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:31.886 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:32.146 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:32.146 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:06:32.146 00:06:32.146 --- 10.0.0.3 ping statistics --- 00:06:32.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.146 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:32.146 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:32.146 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:06:32.146 00:06:32.146 --- 10.0.0.4 ping statistics --- 00:06:32.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.146 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:32.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:32.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:06:32.146 00:06:32.146 --- 10.0.0.1 ping statistics --- 00:06:32.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.146 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:32.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:32.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:06:32.146 00:06:32.146 --- 10.0.0.2 ping statistics --- 00:06:32.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.146 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62013 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62013 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62013 ']' 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.146 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.146 [2024-12-06 12:14:18.758557] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
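The nvmf_tgt instance now starting runs inside the nvmf_tgt_ns_spdk network namespace that nvmf_veth_init assembled above: the target-side veth ends live in the namespace, the initiator-side ends stay in the root namespace, and both halves hang off one bridge so the initiators on 10.0.0.1/10.0.0.2 can reach the target on 10.0.0.3/10.0.0.4 port 4420. Condensed to one interface pair, the setup traced above amounts to the following; the second pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) and the individual 'ip link set ... up' calls follow the same pattern and are omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end, root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target end, moved below
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                         # bridge the two halves together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                              # sanity check, as in the log above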
00:06:32.146 [2024-12-06 12:14:18.758651] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.406 [2024-12-06 12:14:18.912598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.406 [2024-12-06 12:14:18.955512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.406 [2024-12-06 12:14:18.955578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.406 [2024-12-06 12:14:18.955593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.406 [2024-12-06 12:14:18.955604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.406 [2024-12-06 12:14:18.955613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.406 [2024-12-06 12:14:18.956594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.406 [2024-12-06 12:14:18.956727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.406 [2024-12-06 12:14:18.956861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.406 [2024-12-06 12:14:18.956861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.406 [2024-12-06 12:14:18.993409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.406 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.406 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:32.406 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:32.406 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.406 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.687 [2024-12-06 12:14:19.091431] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.687 Malloc0 00:06:32.687 [2024-12-06 12:14:19.159917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62065 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62065 /var/tmp/bdevperf.sock 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62065 ']' 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:32.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
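The Malloc0 bdev and the TCP listener on 10.0.0.3 port 4420 that appear above come from the rpcs.txt batch the test just cat'ed into rpc_cmd. The file's exact contents are not echoed in this log, so the following is only a plausible reconstruction built from values visible here (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, serial SPDKISFASTANDAWESOME, subsystem nqn.2016-06.io.spdk:cnode0 from the bdevperf config below), expressed as standard SPDK RPCs in rpc.py batch form:

bdev_malloc_create -b Malloc0 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420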
00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:32.687 { 00:06:32.687 "params": { 00:06:32.687 "name": "Nvme$subsystem", 00:06:32.687 "trtype": "$TEST_TRANSPORT", 00:06:32.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:32.687 "adrfam": "ipv4", 00:06:32.687 "trsvcid": "$NVMF_PORT", 00:06:32.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:32.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:32.687 "hdgst": ${hdgst:-false}, 00:06:32.687 "ddgst": ${ddgst:-false} 00:06:32.687 }, 00:06:32.687 "method": "bdev_nvme_attach_controller" 00:06:32.687 } 00:06:32.687 EOF 00:06:32.687 )") 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:32.687 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:32.687 "params": { 00:06:32.687 "name": "Nvme0", 00:06:32.687 "trtype": "tcp", 00:06:32.687 "traddr": "10.0.0.3", 00:06:32.687 "adrfam": "ipv4", 00:06:32.687 "trsvcid": "4420", 00:06:32.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:32.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:32.687 "hdgst": false, 00:06:32.687 "ddgst": false 00:06:32.687 }, 00:06:32.687 "method": "bdev_nvme_attach_controller" 00:06:32.687 }' 00:06:32.687 [2024-12-06 12:14:19.272011] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:06:32.687 [2024-12-06 12:14:19.272109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62065 ] 00:06:32.981 [2024-12-06 12:14:19.426301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.981 [2024-12-06 12:14:19.465715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.981 [2024-12-06 12:14:19.509079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.981 Running I/O for 10 seconds... 
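bdevperf never reads a config file from disk here: --json /dev/fd/63 is bash process substitution, so the bdev_nvme_attach_controller config printed above is generated on the fly by gen_nvmf_target_json and handed to bdevperf through a file descriptor. A trimmed stand-in showing the pattern, with the generated fragment wrapped in the same subsystems/bdev envelope used by the spdk_dd test earlier in this log (values mirror the resolved config above):

gen_target_json() {   # stand-in for gen_nvmf_target_json from test/nvmf/common.sh
  printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [ {
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false },
      "method": "bdev_nvme_attach_controller" } ] } ] }'
}
# --json /dev/fd/63 in the trace corresponds to this process substitution:
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json) -q 64 -o 65536 -w verify -t 10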
00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:33.241 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:33.502 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:33.502 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:33.502 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:33.502 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:33.502 12:14:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.502 12:14:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.502 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:33.502 [2024-12-06 12:14:20.074493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.502 [2024-12-06 12:14:20.074844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.502 [2024-12-06 12:14:20.074855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.074864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.074875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.074884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.074895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.074904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.074915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.074924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.074935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.074944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.074955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.074982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.074996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:33.503 [2024-12-06 12:14:20.075619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.503 [2024-12-06 12:14:20.075775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.503 [2024-12-06 12:14:20.075786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.075795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.075806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.075815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.075826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.075835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.075846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 
[2024-12-06 12:14:20.075855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.075866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.075876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.075887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.075896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.075907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.075916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.075927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.075936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.075947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.075956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.075967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.075976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.075987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.075996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.076007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.076016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.076027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.504 [2024-12-06 12:14:20.076036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.076047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb4c00 is same with the state(6) to be set 00:06:33.504 [2024-12-06 12:14:20.076214] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.504 [2024-12-06 12:14:20.076257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.076280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.504 [2024-12-06 12:14:20.076292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.076303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.504 [2024-12-06 12:14:20.076312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.076323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.504 [2024-12-06 12:14:20.076332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.504 [2024-12-06 12:14:20.076342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5ce0 is same with the state(6) to be set 00:06:33.504 [2024-12-06 12:14:20.077492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:33.504 task offset: 90112 on job bdev=Nvme0n1 fails 00:06:33.504 00:06:33.504 Latency(us) 00:06:33.504 [2024-12-06T12:14:20.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.504 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:33.504 Job: Nvme0n1 ended in about 0.46 seconds with error 00:06:33.504 Verification LBA range: start 0x0 length 0x400 00:06:33.504 Nvme0n1 : 0.46 1517.97 94.87 138.00 0.00 37416.21 2323.55 39559.91 00:06:33.504 [2024-12-06T12:14:20.162Z] =================================================================================================================== 00:06:33.504 [2024-12-06T12:14:20.162Z] Total : 1517.97 94.87 138.00 0.00 37416.21 2323.55 39559.91 00:06:33.504 [2024-12-06 12:14:20.079479] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.504 [2024-12-06 12:14:20.079512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb5ce0 (9): Bad file descriptor 00:06:33.504 [2024-12-06 12:14:20.091938] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
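The burst of "ABORTED - SQ DELETION" completions above is the expected effect of the test removing the host's NQN from the subsystem while bdevperf still has 64 I/Os queued: the target tears down the queue pair, the initiator reports the aborted writes, and once the host is re-added the automatic controller reset reconnects (the "Resetting controller successful" line). A condensed sketch of that sequence using rpc.py directly rather than the rpc_cmd helper is below; the socket path, bdev name, and 100-op threshold are taken from this run.

#!/usr/bin/env bash
# Host-management sequence from the trace above, condensed. Assumes the nvmf
# target app uses the default RPC socket and bdevperf /var/tmp/bdevperf.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
perf_sock=/var/tmp/bdevperf.sock

# Wait until the Nvme0n1 bdev inside bdevperf has completed some reads,
# i.e. I/O is actually flowing before the host is revoked.
for _ in $(seq 10); do
    ops=$("$rpc" -s "$perf_sock" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$ops" -ge 100 ] && break
    sleep 0.25
done

# Revoking the host while I/O is in flight makes the target delete the queue
# pairs, which shows up as the "ABORTED - SQ DELETION" completions above.
"$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Re-adding the host lets the initiator's controller reset reconnect.
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0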
00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62065 00:06:34.441 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62065) - No such process 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:34.441 { 00:06:34.441 "params": { 00:06:34.441 "name": "Nvme$subsystem", 00:06:34.441 "trtype": "$TEST_TRANSPORT", 00:06:34.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:34.441 "adrfam": "ipv4", 00:06:34.441 "trsvcid": "$NVMF_PORT", 00:06:34.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:34.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:34.441 "hdgst": ${hdgst:-false}, 00:06:34.441 "ddgst": ${ddgst:-false} 00:06:34.441 }, 00:06:34.441 "method": "bdev_nvme_attach_controller" 00:06:34.441 } 00:06:34.441 EOF 00:06:34.441 )") 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:34.441 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:34.441 "params": { 00:06:34.441 "name": "Nvme0", 00:06:34.441 "trtype": "tcp", 00:06:34.441 "traddr": "10.0.0.3", 00:06:34.441 "adrfam": "ipv4", 00:06:34.441 "trsvcid": "4420", 00:06:34.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:34.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:34.441 "hdgst": false, 00:06:34.441 "ddgst": false 00:06:34.441 }, 00:06:34.441 "method": "bdev_nvme_attach_controller" 00:06:34.441 }' 00:06:34.700 [2024-12-06 12:14:21.133302] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:34.700 [2024-12-06 12:14:21.133385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62106 ] 00:06:34.700 [2024-12-06 12:14:21.280742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.700 [2024-12-06 12:14:21.310605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.700 [2024-12-06 12:14:21.348069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.959 Running I/O for 1 seconds... 00:06:35.897 1664.00 IOPS, 104.00 MiB/s 00:06:35.897 Latency(us) 00:06:35.897 [2024-12-06T12:14:22.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.897 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:35.897 Verification LBA range: start 0x0 length 0x400 00:06:35.897 Nvme0n1 : 1.01 1703.26 106.45 0.00 0.00 36888.16 3515.11 33363.78 00:06:35.897 [2024-12-06T12:14:22.555Z] =================================================================================================================== 00:06:35.897 [2024-12-06T12:14:22.555Z] Total : 1703.26 106.45 0.00 0.00 36888.16 3515.11 33363.78 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:36.157 rmmod nvme_tcp 00:06:36.157 rmmod nvme_fabrics 00:06:36.157 rmmod nvme_keyring 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62013 ']' 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62013 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62013 ']' 00:06:36.157 12:14:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62013 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62013 00:06:36.157 killing process with pid 62013 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62013' 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62013 00:06:36.157 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62013 00:06:36.416 [2024-12-06 12:14:22.860914] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:36.416 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:36.416 12:14:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:36.416 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:36.416 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:36.675 00:06:36.675 real 0m5.145s 00:06:36.675 user 0m17.978s 00:06:36.675 sys 0m1.415s 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.675 ************************************ 00:06:36.675 END TEST nvmf_host_management 00:06:36.675 ************************************ 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:36.675 ************************************ 00:06:36.675 START TEST nvmf_lvol 00:06:36.675 ************************************ 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:36.675 * Looking for test storage... 
00:06:36.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:36.675 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:36.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.936 --rc genhtml_branch_coverage=1 00:06:36.936 --rc genhtml_function_coverage=1 00:06:36.936 --rc genhtml_legend=1 00:06:36.936 --rc geninfo_all_blocks=1 00:06:36.936 --rc geninfo_unexecuted_blocks=1 00:06:36.936 00:06:36.936 ' 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:36.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.936 --rc genhtml_branch_coverage=1 00:06:36.936 --rc genhtml_function_coverage=1 00:06:36.936 --rc genhtml_legend=1 00:06:36.936 --rc geninfo_all_blocks=1 00:06:36.936 --rc geninfo_unexecuted_blocks=1 00:06:36.936 00:06:36.936 ' 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:36.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.936 --rc genhtml_branch_coverage=1 00:06:36.936 --rc genhtml_function_coverage=1 00:06:36.936 --rc genhtml_legend=1 00:06:36.936 --rc geninfo_all_blocks=1 00:06:36.936 --rc geninfo_unexecuted_blocks=1 00:06:36.936 00:06:36.936 ' 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:36.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.936 --rc genhtml_branch_coverage=1 00:06:36.936 --rc genhtml_function_coverage=1 00:06:36.936 --rc genhtml_legend=1 00:06:36.936 --rc geninfo_all_blocks=1 00:06:36.936 --rc geninfo_unexecuted_blocks=1 00:06:36.936 00:06:36.936 ' 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.936 12:14:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.936 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:36.937 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:36.937 
12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
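With NET_TYPE=virt, nvmftestinit ends in nvmf_veth_init: the variables above and the ip commands traced below build a small virtual topology, with one veth pair per interface, the target-side ends moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4), and the host-side peers joined by the nvmf_br bridge. A condensed sketch of that topology, covering only the first initiator/target pair and assuming root privileges:

#!/usr/bin/env bash
# Condensed version of the nvmf_veth_init topology traced below; only the
# first initiator/target veth pair is shown (the run also creates
# nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4).
set -e
ip netns add nvmf_tgt_ns_spdk

# One veth pair per endpoint; the *_br ends stay in the root namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator keeps 10.0.0.1; the target listens on 10.0.0.3 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A bridge in the root namespace stitches the two sides together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br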
00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:36.937 Cannot find device "nvmf_init_br" 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:36.937 Cannot find device "nvmf_init_br2" 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:36.937 Cannot find device "nvmf_tgt_br" 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:36.937 Cannot find device "nvmf_tgt_br2" 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:36.937 Cannot find device "nvmf_init_br" 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:36.937 Cannot find device "nvmf_init_br2" 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:36.937 Cannot find device "nvmf_tgt_br" 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:36.937 Cannot find device "nvmf_tgt_br2" 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:36.937 Cannot find device "nvmf_br" 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:36.937 Cannot find device "nvmf_init_if" 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:36.937 Cannot find device "nvmf_init_if2" 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:36.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:36.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:36.937 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:37.197 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:37.198 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:37.198 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:06:37.198 00:06:37.198 --- 10.0.0.3 ping statistics --- 00:06:37.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.198 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:37.198 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:37.198 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:06:37.198 00:06:37.198 --- 10.0.0.4 ping statistics --- 00:06:37.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.198 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:37.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:37.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:06:37.198 00:06:37.198 --- 10.0.0.1 ping statistics --- 00:06:37.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.198 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:37.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:37.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:06:37.198 00:06:37.198 --- 10.0.0.2 ping statistics --- 00:06:37.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.198 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62378 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62378 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62378 ']' 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.198 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:37.457 [2024-12-06 12:14:23.866424] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:37.457 [2024-12-06 12:14:23.866506] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.457 [2024-12-06 12:14:24.013947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.457 [2024-12-06 12:14:24.041667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.457 [2024-12-06 12:14:24.041729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.457 [2024-12-06 12:14:24.041740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.457 [2024-12-06 12:14:24.041747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.457 [2024-12-06 12:14:24.041753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:37.457 [2024-12-06 12:14:24.042542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.457 [2024-12-06 12:14:24.042876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.457 [2024-12-06 12:14:24.042878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.457 [2024-12-06 12:14:24.072584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.717 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.717 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:37.717 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:37.717 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.717 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:37.717 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.717 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:37.976 [2024-12-06 12:14:24.462705] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.976 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:38.236 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:38.236 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:38.495 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:38.495 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:38.754 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:39.014 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=956c8f81-de4b-4584-a279-343ec992b873 00:06:39.014 12:14:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 956c8f81-de4b-4584-a279-343ec992b873 lvol 20 00:06:39.273 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=82f9ff4d-534f-4e6f-a28b-a40d88db93e3 00:06:39.273 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:39.531 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 82f9ff4d-534f-4e6f-a28b-a40d88db93e3 00:06:39.790 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:40.050 [2024-12-06 12:14:26.501859] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:40.050 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:40.309 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62446 00:06:40.309 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:40.309 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:41.248 12:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 82f9ff4d-534f-4e6f-a28b-a40d88db93e3 MY_SNAPSHOT 00:06:41.507 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bf3b6066-8e3c-484a-8acd-eea403d80e0b 00:06:41.507 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 82f9ff4d-534f-4e6f-a28b-a40d88db93e3 30 00:06:41.767 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone bf3b6066-8e3c-484a-8acd-eea403d80e0b MY_CLONE 00:06:42.026 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=69895a78-fbfd-49ed-849c-c0767f8e2a12 00:06:42.026 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 69895a78-fbfd-49ed-849c-c0767f8e2a12 00:06:42.594 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62446 00:06:50.714 Initializing NVMe Controllers 00:06:50.714 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:50.714 Controller IO queue size 128, less than required. 00:06:50.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:50.714 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:50.714 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:50.714 Initialization complete. Launching workers. 
00:06:50.714 ======================================================== 00:06:50.714 Latency(us) 00:06:50.714 Device Information : IOPS MiB/s Average min max 00:06:50.714 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11113.50 43.41 11520.29 1492.60 65098.99 00:06:50.714 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11250.20 43.95 11382.14 2801.74 68346.82 00:06:50.714 ======================================================== 00:06:50.714 Total : 22363.70 87.36 11450.80 1492.60 68346.82 00:06:50.714 00:06:50.714 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:50.714 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 82f9ff4d-534f-4e6f-a28b-a40d88db93e3 00:06:50.974 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 956c8f81-de4b-4584-a279-343ec992b873 00:06:51.233 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:51.233 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:51.233 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:51.233 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:51.233 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:51.493 rmmod nvme_tcp 00:06:51.493 rmmod nvme_fabrics 00:06:51.493 rmmod nvme_keyring 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62378 ']' 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62378 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62378 ']' 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62378 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62378 00:06:51.493 killing process with pid 62378 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62378' 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62378 00:06:51.493 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62378 00:06:51.493 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:51.493 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:51.493 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:51.493 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:51.493 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:51.493 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:51.493 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:51.493 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:51.493 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:51.493 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:06:51.753 00:06:51.753 real 0m15.165s 00:06:51.753 user 1m3.254s 00:06:51.753 sys 0m3.931s 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:51.753 ************************************ 00:06:51.753 END TEST nvmf_lvol 00:06:51.753 ************************************ 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:51.753 ************************************ 00:06:51.753 START TEST nvmf_lvs_grow 00:06:51.753 ************************************ 00:06:51.753 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:52.014 * Looking for test storage... 00:06:52.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:52.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.014 --rc genhtml_branch_coverage=1 00:06:52.014 --rc genhtml_function_coverage=1 00:06:52.014 --rc genhtml_legend=1 00:06:52.014 --rc geninfo_all_blocks=1 00:06:52.014 --rc geninfo_unexecuted_blocks=1 00:06:52.014 00:06:52.014 ' 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:52.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.014 --rc genhtml_branch_coverage=1 00:06:52.014 --rc genhtml_function_coverage=1 00:06:52.014 --rc genhtml_legend=1 00:06:52.014 --rc geninfo_all_blocks=1 00:06:52.014 --rc geninfo_unexecuted_blocks=1 00:06:52.014 00:06:52.014 ' 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:52.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.014 --rc genhtml_branch_coverage=1 00:06:52.014 --rc genhtml_function_coverage=1 00:06:52.014 --rc genhtml_legend=1 00:06:52.014 --rc geninfo_all_blocks=1 00:06:52.014 --rc geninfo_unexecuted_blocks=1 00:06:52.014 00:06:52.014 ' 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:52.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.014 --rc genhtml_branch_coverage=1 00:06:52.014 --rc genhtml_function_coverage=1 00:06:52.014 --rc genhtml_legend=1 00:06:52.014 --rc geninfo_all_blocks=1 00:06:52.014 --rc geninfo_unexecuted_blocks=1 00:06:52.014 00:06:52.014 ' 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:52.014 12:14:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:52.014 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:52.014 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:52.015 Cannot find device "nvmf_init_br" 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:52.015 Cannot find device "nvmf_init_br2" 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:52.015 Cannot find device "nvmf_tgt_br" 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:52.015 Cannot find device "nvmf_tgt_br2" 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:52.015 Cannot find device "nvmf_init_br" 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:52.015 Cannot find device "nvmf_init_br2" 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:06:52.015 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:52.274 Cannot find device "nvmf_tgt_br" 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:52.274 Cannot find device "nvmf_tgt_br2" 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:52.274 Cannot find device "nvmf_br" 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:52.274 Cannot find device "nvmf_init_if" 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:52.274 Cannot find device "nvmf_init_if2" 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:52.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:52.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:52.274 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:52.275 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:52.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:52.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:06:52.534 00:06:52.534 --- 10.0.0.3 ping statistics --- 00:06:52.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.534 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:52.534 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:52.534 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 00:06:52.534 00:06:52.534 --- 10.0.0.4 ping statistics --- 00:06:52.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.534 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:52.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:52.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:06:52.534 00:06:52.534 --- 10.0.0.1 ping statistics --- 00:06:52.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.534 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:52.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:52.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:06:52.534 00:06:52.534 --- 10.0.0.2 ping statistics --- 00:06:52.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.534 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=62827 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 62827 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 62827 ']' 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.534 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.535 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.535 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.535 [2024-12-06 12:14:39.028815] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:52.535 [2024-12-06 12:14:39.028886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.535 [2024-12-06 12:14:39.168441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.794 [2024-12-06 12:14:39.197162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.794 [2024-12-06 12:14:39.197237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.794 [2024-12-06 12:14:39.197263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.794 [2024-12-06 12:14:39.197270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.794 [2024-12-06 12:14:39.197276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:52.794 [2024-12-06 12:14:39.197583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.794 [2024-12-06 12:14:39.225993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.794 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.794 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:52.794 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:52.794 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.794 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.794 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.794 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:53.054 [2024-12-06 12:14:39.596811] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.054 ************************************ 00:06:53.054 START TEST lvs_grow_clean 00:06:53.054 ************************************ 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:53.054 12:14:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:53.054 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:53.313 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:53.313 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:53.572 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=346efc25-0101-40f7-9a1d-9a2ba6972b95 00:06:53.572 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 00:06:53.572 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:53.831 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:53.831 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:53.831 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 lvol 150 00:06:54.091 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=04eb2e76-1e2e-4939-bdf3-c35bb0e9dad9 00:06:54.091 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:54.091 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:54.351 [2024-12-06 12:14:40.883294] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:54.351 [2024-12-06 12:14:40.883377] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:54.351 true 00:06:54.351 12:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 00:06:54.351 12:14:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:54.610 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:54.611 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:54.870 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 04eb2e76-1e2e-4939-bdf3-c35bb0e9dad9 00:06:55.129 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:55.129 [2024-12-06 12:14:41.763749] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:55.129 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:55.388 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=62902 00:06:55.388 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:55.388 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:55.388 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 62902 /var/tmp/bdevperf.sock 00:06:55.388 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 62902 ']' 00:06:55.388 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:55.388 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:55.388 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:55.388 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.388 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:55.648 [2024-12-06 12:14:42.048189] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
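[Note] The xtrace above boils down to the following setup; this is an illustrative sketch, not the test script verbatim (rpc.py path, backing-file path, cluster size, NQN and listener address are the ones from this run; $lvs and $lvol stand in for the UUIDs printed above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  rm -f "$aio_file"
  truncate -s 200M "$aio_file"                                  # 200M backing file
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096                # register it as an AIO bdev
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)          # 4M clusters -> 49 data clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)              # 150M logical volume on the lvstore
  truncate -s 400M "$aio_file"                                  # grow the file ahead of the lvstore
  $rpc bdev_aio_rescan aio_bdev                                 # 51200 -> 102400 blocks; lvstore still reports 49 clusters
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

bdevperf then attaches to that subsystem over TCP (bdev_nvme_attach_controller -b Nvme0 ... -n nqn.2016-06.io.spdk:cnode0) and drives randwrite I/O against Nvme0n1 while the lvstore is grown, as the trace below shows.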
00:06:55.648 [2024-12-06 12:14:42.048286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62902 ] 00:06:55.648 [2024-12-06 12:14:42.199999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.648 [2024-12-06 12:14:42.238833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.648 [2024-12-06 12:14:42.272071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.586 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.586 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:56.586 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:56.845 Nvme0n1 00:06:56.845 12:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:57.104 [ 00:06:57.104 { 00:06:57.104 "name": "Nvme0n1", 00:06:57.104 "aliases": [ 00:06:57.104 "04eb2e76-1e2e-4939-bdf3-c35bb0e9dad9" 00:06:57.104 ], 00:06:57.104 "product_name": "NVMe disk", 00:06:57.104 "block_size": 4096, 00:06:57.105 "num_blocks": 38912, 00:06:57.105 "uuid": "04eb2e76-1e2e-4939-bdf3-c35bb0e9dad9", 00:06:57.105 "numa_id": -1, 00:06:57.105 "assigned_rate_limits": { 00:06:57.105 "rw_ios_per_sec": 0, 00:06:57.105 "rw_mbytes_per_sec": 0, 00:06:57.105 "r_mbytes_per_sec": 0, 00:06:57.105 "w_mbytes_per_sec": 0 00:06:57.105 }, 00:06:57.105 "claimed": false, 00:06:57.105 "zoned": false, 00:06:57.105 "supported_io_types": { 00:06:57.105 "read": true, 00:06:57.105 "write": true, 00:06:57.105 "unmap": true, 00:06:57.105 "flush": true, 00:06:57.105 "reset": true, 00:06:57.105 "nvme_admin": true, 00:06:57.105 "nvme_io": true, 00:06:57.105 "nvme_io_md": false, 00:06:57.105 "write_zeroes": true, 00:06:57.105 "zcopy": false, 00:06:57.105 "get_zone_info": false, 00:06:57.105 "zone_management": false, 00:06:57.105 "zone_append": false, 00:06:57.105 "compare": true, 00:06:57.105 "compare_and_write": true, 00:06:57.105 "abort": true, 00:06:57.105 "seek_hole": false, 00:06:57.105 "seek_data": false, 00:06:57.105 "copy": true, 00:06:57.105 "nvme_iov_md": false 00:06:57.105 }, 00:06:57.105 "memory_domains": [ 00:06:57.105 { 00:06:57.105 "dma_device_id": "system", 00:06:57.105 "dma_device_type": 1 00:06:57.105 } 00:06:57.105 ], 00:06:57.105 "driver_specific": { 00:06:57.105 "nvme": [ 00:06:57.105 { 00:06:57.105 "trid": { 00:06:57.105 "trtype": "TCP", 00:06:57.105 "adrfam": "IPv4", 00:06:57.105 "traddr": "10.0.0.3", 00:06:57.105 "trsvcid": "4420", 00:06:57.105 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:57.105 }, 00:06:57.105 "ctrlr_data": { 00:06:57.105 "cntlid": 1, 00:06:57.105 "vendor_id": "0x8086", 00:06:57.105 "model_number": "SPDK bdev Controller", 00:06:57.105 "serial_number": "SPDK0", 00:06:57.105 "firmware_revision": "25.01", 00:06:57.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:57.105 "oacs": { 00:06:57.105 "security": 0, 00:06:57.105 "format": 0, 00:06:57.105 "firmware": 0, 
00:06:57.105 "ns_manage": 0 00:06:57.105 }, 00:06:57.105 "multi_ctrlr": true, 00:06:57.105 "ana_reporting": false 00:06:57.105 }, 00:06:57.105 "vs": { 00:06:57.105 "nvme_version": "1.3" 00:06:57.105 }, 00:06:57.105 "ns_data": { 00:06:57.105 "id": 1, 00:06:57.105 "can_share": true 00:06:57.105 } 00:06:57.105 } 00:06:57.105 ], 00:06:57.105 "mp_policy": "active_passive" 00:06:57.105 } 00:06:57.105 } 00:06:57.105 ] 00:06:57.105 12:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:57.105 12:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=62926 00:06:57.105 12:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:57.105 Running I/O for 10 seconds... 00:06:58.043 Latency(us) 00:06:58.043 [2024-12-06T12:14:44.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.043 Nvme0n1 : 1.00 6457.00 25.22 0.00 0.00 0.00 0.00 0.00 00:06:58.043 [2024-12-06T12:14:44.701Z] =================================================================================================================== 00:06:58.043 [2024-12-06T12:14:44.701Z] Total : 6457.00 25.22 0.00 0.00 0.00 0.00 0.00 00:06:58.043 00:06:58.981 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 00:06:59.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.239 Nvme0n1 : 2.00 6340.00 24.77 0.00 0.00 0.00 0.00 0.00 00:06:59.239 [2024-12-06T12:14:45.897Z] =================================================================================================================== 00:06:59.239 [2024-12-06T12:14:45.897Z] Total : 6340.00 24.77 0.00 0.00 0.00 0.00 0.00 00:06:59.239 00:06:59.240 true 00:06:59.498 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 00:06:59.498 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:59.757 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:59.757 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:59.757 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 62926 00:07:00.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.016 Nvme0n1 : 3.00 6276.00 24.52 0.00 0.00 0.00 0.00 0.00 00:07:00.016 [2024-12-06T12:14:46.674Z] =================================================================================================================== 00:07:00.016 [2024-12-06T12:14:46.674Z] Total : 6276.00 24.52 0.00 0.00 0.00 0.00 0.00 00:07:00.016 00:07:01.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.396 Nvme0n1 : 4.00 6294.50 24.59 0.00 0.00 0.00 0.00 0.00 00:07:01.396 [2024-12-06T12:14:48.054Z] 
=================================================================================================================== 00:07:01.396 [2024-12-06T12:14:48.054Z] Total : 6294.50 24.59 0.00 0.00 0.00 0.00 0.00 00:07:01.396 00:07:02.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.336 Nvme0n1 : 5.00 6280.20 24.53 0.00 0.00 0.00 0.00 0.00 00:07:02.336 [2024-12-06T12:14:48.994Z] =================================================================================================================== 00:07:02.336 [2024-12-06T12:14:48.994Z] Total : 6280.20 24.53 0.00 0.00 0.00 0.00 0.00 00:07:02.336 00:07:03.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.305 Nvme0n1 : 6.00 6270.67 24.49 0.00 0.00 0.00 0.00 0.00 00:07:03.305 [2024-12-06T12:14:49.963Z] =================================================================================================================== 00:07:03.305 [2024-12-06T12:14:49.963Z] Total : 6270.67 24.49 0.00 0.00 0.00 0.00 0.00 00:07:03.305 00:07:04.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.245 Nvme0n1 : 7.00 6263.86 24.47 0.00 0.00 0.00 0.00 0.00 00:07:04.245 [2024-12-06T12:14:50.903Z] =================================================================================================================== 00:07:04.245 [2024-12-06T12:14:50.903Z] Total : 6263.86 24.47 0.00 0.00 0.00 0.00 0.00 00:07:04.245 00:07:05.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.181 Nvme0n1 : 8.00 6258.75 24.45 0.00 0.00 0.00 0.00 0.00 00:07:05.181 [2024-12-06T12:14:51.839Z] =================================================================================================================== 00:07:05.181 [2024-12-06T12:14:51.839Z] Total : 6258.75 24.45 0.00 0.00 0.00 0.00 0.00 00:07:05.182 00:07:06.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.121 Nvme0n1 : 9.00 6254.78 24.43 0.00 0.00 0.00 0.00 0.00 00:07:06.121 [2024-12-06T12:14:52.779Z] =================================================================================================================== 00:07:06.121 [2024-12-06T12:14:52.779Z] Total : 6254.78 24.43 0.00 0.00 0.00 0.00 0.00 00:07:06.121 00:07:07.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.060 Nvme0n1 : 10.00 6238.90 24.37 0.00 0.00 0.00 0.00 0.00 00:07:07.060 [2024-12-06T12:14:53.718Z] =================================================================================================================== 00:07:07.060 [2024-12-06T12:14:53.718Z] Total : 6238.90 24.37 0.00 0.00 0.00 0.00 0.00 00:07:07.060 00:07:07.060 00:07:07.060 Latency(us) 00:07:07.060 [2024-12-06T12:14:53.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.060 Nvme0n1 : 10.00 6249.44 24.41 0.00 0.00 20477.29 12928.47 59578.18 00:07:07.060 [2024-12-06T12:14:53.718Z] =================================================================================================================== 00:07:07.060 [2024-12-06T12:14:53.718Z] Total : 6249.44 24.41 0.00 0.00 20477.29 12928.47 59578.18 00:07:07.060 { 00:07:07.060 "results": [ 00:07:07.060 { 00:07:07.060 "job": "Nvme0n1", 00:07:07.060 "core_mask": "0x2", 00:07:07.060 "workload": "randwrite", 00:07:07.060 "status": "finished", 00:07:07.060 "queue_depth": 128, 00:07:07.060 "io_size": 4096, 00:07:07.060 "runtime": 
10.003613, 00:07:07.060 "iops": 6249.442076577732, 00:07:07.060 "mibps": 24.411883111631766, 00:07:07.060 "io_failed": 0, 00:07:07.060 "io_timeout": 0, 00:07:07.060 "avg_latency_us": 20477.288023882957, 00:07:07.060 "min_latency_us": 12928.465454545454, 00:07:07.060 "max_latency_us": 59578.181818181816 00:07:07.060 } 00:07:07.060 ], 00:07:07.060 "core_count": 1 00:07:07.060 } 00:07:07.060 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 62902 00:07:07.060 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 62902 ']' 00:07:07.060 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 62902 00:07:07.060 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:07.060 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.060 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62902 00:07:07.060 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:07.061 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:07.061 killing process with pid 62902 00:07:07.061 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62902' 00:07:07.061 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 62902 00:07:07.061 Received shutdown signal, test time was about 10.000000 seconds 00:07:07.061 00:07:07.061 Latency(us) 00:07:07.061 [2024-12-06T12:14:53.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.061 [2024-12-06T12:14:53.719Z] =================================================================================================================== 00:07:07.061 [2024-12-06T12:14:53.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:07.061 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 62902 00:07:07.320 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:07.578 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:07.836 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 00:07:07.836 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:08.093 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:08.093 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:08.093 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:08.351 [2024-12-06 12:14:54.856776] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:08.351 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 00:07:08.622 request: 00:07:08.622 { 00:07:08.622 "uuid": "346efc25-0101-40f7-9a1d-9a2ba6972b95", 00:07:08.622 "method": "bdev_lvol_get_lvstores", 00:07:08.622 "req_id": 1 00:07:08.622 } 00:07:08.622 Got JSON-RPC error response 00:07:08.622 response: 00:07:08.622 { 00:07:08.622 "code": -19, 00:07:08.622 "message": "No such device" 00:07:08.622 } 00:07:08.622 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:08.622 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.622 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.622 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.622 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:08.881 aio_bdev 00:07:08.881 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
04eb2e76-1e2e-4939-bdf3-c35bb0e9dad9 00:07:08.881 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=04eb2e76-1e2e-4939-bdf3-c35bb0e9dad9 00:07:08.881 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:08.881 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:08.881 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:08.881 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:08.881 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:09.140 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 04eb2e76-1e2e-4939-bdf3-c35bb0e9dad9 -t 2000 00:07:09.399 [ 00:07:09.399 { 00:07:09.399 "name": "04eb2e76-1e2e-4939-bdf3-c35bb0e9dad9", 00:07:09.399 "aliases": [ 00:07:09.399 "lvs/lvol" 00:07:09.399 ], 00:07:09.399 "product_name": "Logical Volume", 00:07:09.399 "block_size": 4096, 00:07:09.399 "num_blocks": 38912, 00:07:09.399 "uuid": "04eb2e76-1e2e-4939-bdf3-c35bb0e9dad9", 00:07:09.399 "assigned_rate_limits": { 00:07:09.399 "rw_ios_per_sec": 0, 00:07:09.399 "rw_mbytes_per_sec": 0, 00:07:09.399 "r_mbytes_per_sec": 0, 00:07:09.399 "w_mbytes_per_sec": 0 00:07:09.399 }, 00:07:09.399 "claimed": false, 00:07:09.399 "zoned": false, 00:07:09.399 "supported_io_types": { 00:07:09.399 "read": true, 00:07:09.399 "write": true, 00:07:09.399 "unmap": true, 00:07:09.399 "flush": false, 00:07:09.399 "reset": true, 00:07:09.399 "nvme_admin": false, 00:07:09.399 "nvme_io": false, 00:07:09.399 "nvme_io_md": false, 00:07:09.399 "write_zeroes": true, 00:07:09.399 "zcopy": false, 00:07:09.399 "get_zone_info": false, 00:07:09.399 "zone_management": false, 00:07:09.399 "zone_append": false, 00:07:09.399 "compare": false, 00:07:09.399 "compare_and_write": false, 00:07:09.399 "abort": false, 00:07:09.399 "seek_hole": true, 00:07:09.399 "seek_data": true, 00:07:09.399 "copy": false, 00:07:09.399 "nvme_iov_md": false 00:07:09.399 }, 00:07:09.399 "driver_specific": { 00:07:09.399 "lvol": { 00:07:09.399 "lvol_store_uuid": "346efc25-0101-40f7-9a1d-9a2ba6972b95", 00:07:09.399 "base_bdev": "aio_bdev", 00:07:09.399 "thin_provision": false, 00:07:09.399 "num_allocated_clusters": 38, 00:07:09.399 "snapshot": false, 00:07:09.399 "clone": false, 00:07:09.399 "esnap_clone": false 00:07:09.399 } 00:07:09.399 } 00:07:09.399 } 00:07:09.399 ] 00:07:09.399 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:09.399 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 00:07:09.399 12:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:09.399 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:09.399 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 00:07:09.399 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:09.672 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:09.672 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 04eb2e76-1e2e-4939-bdf3-c35bb0e9dad9 00:07:09.930 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 346efc25-0101-40f7-9a1d-9a2ba6972b95 00:07:10.188 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:10.446 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:10.704 ************************************ 00:07:10.704 END TEST lvs_grow_clean 00:07:10.704 ************************************ 00:07:10.704 00:07:10.704 real 0m17.713s 00:07:10.704 user 0m17.022s 00:07:10.704 sys 0m2.193s 00:07:10.704 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.704 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.963 ************************************ 00:07:10.963 START TEST lvs_grow_dirty 00:07:10.963 ************************************ 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:10.963 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:11.220 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:11.220 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:11.477 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:11.477 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:11.477 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:11.802 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:11.802 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:11.802 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dc77546a-f41b-4e25-8f4c-87facdac18f4 lvol 150 00:07:12.068 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8b2d97d2-4f32-4950-9b77-7b4832d7577b 00:07:12.068 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:12.068 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:12.068 [2024-12-06 12:14:58.691332] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:12.068 [2024-12-06 12:14:58.691621] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:12.068 true 00:07:12.068 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:12.068 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:12.635 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:12.635 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:12.635 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b2d97d2-4f32-4950-9b77-7b4832d7577b 00:07:12.894 12:14:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:13.153 [2024-12-06 12:14:59.740103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:13.153 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:13.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:13.412 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63173 00:07:13.412 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:13.412 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.412 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63173 /var/tmp/bdevperf.sock 00:07:13.412 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63173 ']' 00:07:13.412 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:13.412 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.412 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:13.412 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.412 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:13.412 [2024-12-06 12:15:00.025927] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:07:13.412 [2024-12-06 12:15:00.026296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63173 ] 00:07:13.672 [2024-12-06 12:15:00.165413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.672 [2024-12-06 12:15:00.194997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.672 [2024-12-06 12:15:00.223182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.672 12:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.672 12:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:13.672 12:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:13.931 Nvme0n1 00:07:14.191 12:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:14.191 [ 00:07:14.191 { 00:07:14.191 "name": "Nvme0n1", 00:07:14.191 "aliases": [ 00:07:14.191 "8b2d97d2-4f32-4950-9b77-7b4832d7577b" 00:07:14.191 ], 00:07:14.191 "product_name": "NVMe disk", 00:07:14.191 "block_size": 4096, 00:07:14.191 "num_blocks": 38912, 00:07:14.191 "uuid": "8b2d97d2-4f32-4950-9b77-7b4832d7577b", 00:07:14.191 "numa_id": -1, 00:07:14.191 "assigned_rate_limits": { 00:07:14.191 "rw_ios_per_sec": 0, 00:07:14.191 "rw_mbytes_per_sec": 0, 00:07:14.191 "r_mbytes_per_sec": 0, 00:07:14.191 "w_mbytes_per_sec": 0 00:07:14.191 }, 00:07:14.191 "claimed": false, 00:07:14.191 "zoned": false, 00:07:14.191 "supported_io_types": { 00:07:14.191 "read": true, 00:07:14.191 "write": true, 00:07:14.191 "unmap": true, 00:07:14.191 "flush": true, 00:07:14.191 "reset": true, 00:07:14.191 "nvme_admin": true, 00:07:14.191 "nvme_io": true, 00:07:14.191 "nvme_io_md": false, 00:07:14.191 "write_zeroes": true, 00:07:14.191 "zcopy": false, 00:07:14.191 "get_zone_info": false, 00:07:14.191 "zone_management": false, 00:07:14.191 "zone_append": false, 00:07:14.191 "compare": true, 00:07:14.191 "compare_and_write": true, 00:07:14.191 "abort": true, 00:07:14.191 "seek_hole": false, 00:07:14.191 "seek_data": false, 00:07:14.191 "copy": true, 00:07:14.191 "nvme_iov_md": false 00:07:14.191 }, 00:07:14.191 "memory_domains": [ 00:07:14.191 { 00:07:14.191 "dma_device_id": "system", 00:07:14.191 "dma_device_type": 1 00:07:14.191 } 00:07:14.191 ], 00:07:14.191 "driver_specific": { 00:07:14.191 "nvme": [ 00:07:14.191 { 00:07:14.191 "trid": { 00:07:14.191 "trtype": "TCP", 00:07:14.191 "adrfam": "IPv4", 00:07:14.191 "traddr": "10.0.0.3", 00:07:14.191 "trsvcid": "4420", 00:07:14.191 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:14.191 }, 00:07:14.191 "ctrlr_data": { 00:07:14.191 "cntlid": 1, 00:07:14.191 "vendor_id": "0x8086", 00:07:14.191 "model_number": "SPDK bdev Controller", 00:07:14.191 "serial_number": "SPDK0", 00:07:14.191 "firmware_revision": "25.01", 00:07:14.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.191 "oacs": { 00:07:14.191 "security": 0, 00:07:14.191 "format": 0, 00:07:14.191 "firmware": 0, 
00:07:14.191 "ns_manage": 0 00:07:14.191 }, 00:07:14.191 "multi_ctrlr": true, 00:07:14.191 "ana_reporting": false 00:07:14.191 }, 00:07:14.191 "vs": { 00:07:14.191 "nvme_version": "1.3" 00:07:14.191 }, 00:07:14.191 "ns_data": { 00:07:14.191 "id": 1, 00:07:14.191 "can_share": true 00:07:14.191 } 00:07:14.191 } 00:07:14.191 ], 00:07:14.191 "mp_policy": "active_passive" 00:07:14.191 } 00:07:14.191 } 00:07:14.191 ] 00:07:14.191 12:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63178 00:07:14.191 12:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:14.191 12:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:14.450 Running I/O for 10 seconds... 00:07:15.388 Latency(us) 00:07:15.388 [2024-12-06T12:15:02.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.388 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:15.388 [2024-12-06T12:15:02.046Z] =================================================================================================================== 00:07:15.388 [2024-12-06T12:15:02.046Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:15.388 00:07:16.327 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:16.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.327 Nvme0n1 : 2.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:16.327 [2024-12-06T12:15:02.985Z] =================================================================================================================== 00:07:16.327 [2024-12-06T12:15:02.986Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:16.328 00:07:16.586 true 00:07:16.586 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:16.586 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:16.845 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:17.104 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:17.104 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63178 00:07:17.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.363 Nvme0n1 : 3.00 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:17.363 [2024-12-06T12:15:04.021Z] =================================================================================================================== 00:07:17.363 [2024-12-06T12:15:04.021Z] Total : 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:17.363 00:07:18.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.300 Nvme0n1 : 4.00 6381.75 24.93 0.00 0.00 0.00 0.00 0.00 00:07:18.300 [2024-12-06T12:15:04.958Z] 
=================================================================================================================== 00:07:18.300 [2024-12-06T12:15:04.958Z] Total : 6381.75 24.93 0.00 0.00 0.00 0.00 0.00 00:07:18.300 00:07:19.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.687 Nvme0n1 : 5.00 6400.80 25.00 0.00 0.00 0.00 0.00 0.00 00:07:19.687 [2024-12-06T12:15:06.345Z] =================================================================================================================== 00:07:19.687 [2024-12-06T12:15:06.345Z] Total : 6400.80 25.00 0.00 0.00 0.00 0.00 0.00 00:07:19.687 00:07:20.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.622 Nvme0n1 : 6.00 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:20.622 [2024-12-06T12:15:07.280Z] =================================================================================================================== 00:07:20.622 [2024-12-06T12:15:07.280Z] Total : 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:20.622 00:07:21.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.559 Nvme0n1 : 7.00 6386.29 24.95 0.00 0.00 0.00 0.00 0.00 00:07:21.559 [2024-12-06T12:15:08.217Z] =================================================================================================================== 00:07:21.559 [2024-12-06T12:15:08.217Z] Total : 6386.29 24.95 0.00 0.00 0.00 0.00 0.00 00:07:21.559 00:07:22.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.497 Nvme0n1 : 8.00 6365.88 24.87 0.00 0.00 0.00 0.00 0.00 00:07:22.497 [2024-12-06T12:15:09.155Z] =================================================================================================================== 00:07:22.497 [2024-12-06T12:15:09.155Z] Total : 6365.88 24.87 0.00 0.00 0.00 0.00 0.00 00:07:22.497 00:07:23.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.434 Nvme0n1 : 9.00 6355.78 24.83 0.00 0.00 0.00 0.00 0.00 00:07:23.434 [2024-12-06T12:15:10.092Z] =================================================================================================================== 00:07:23.434 [2024-12-06T12:15:10.092Z] Total : 6355.78 24.83 0.00 0.00 0.00 0.00 0.00 00:07:23.434 00:07:24.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.372 Nvme0n1 : 10.00 6240.90 24.38 0.00 0.00 0.00 0.00 0.00 00:07:24.372 [2024-12-06T12:15:11.030Z] =================================================================================================================== 00:07:24.372 [2024-12-06T12:15:11.030Z] Total : 6240.90 24.38 0.00 0.00 0.00 0.00 0.00 00:07:24.372 00:07:24.372 00:07:24.372 Latency(us) 00:07:24.372 [2024-12-06T12:15:11.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.372 Nvme0n1 : 10.02 6242.29 24.38 0.00 0.00 20501.17 13047.62 202089.19 00:07:24.372 [2024-12-06T12:15:11.030Z] =================================================================================================================== 00:07:24.372 [2024-12-06T12:15:11.030Z] Total : 6242.29 24.38 0.00 0.00 20501.17 13047.62 202089.19 00:07:24.372 { 00:07:24.372 "results": [ 00:07:24.372 { 00:07:24.372 "job": "Nvme0n1", 00:07:24.372 "core_mask": "0x2", 00:07:24.372 "workload": "randwrite", 00:07:24.372 "status": "finished", 00:07:24.372 "queue_depth": 128, 00:07:24.372 "io_size": 4096, 00:07:24.372 "runtime": 
10.018276, 00:07:24.372 "iops": 6242.291587893965, 00:07:24.372 "mibps": 24.3839515152108, 00:07:24.372 "io_failed": 0, 00:07:24.372 "io_timeout": 0, 00:07:24.372 "avg_latency_us": 20501.170846873196, 00:07:24.372 "min_latency_us": 13047.621818181819, 00:07:24.372 "max_latency_us": 202089.19272727272 00:07:24.372 } 00:07:24.372 ], 00:07:24.372 "core_count": 1 00:07:24.372 } 00:07:24.372 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63173 00:07:24.372 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63173 ']' 00:07:24.372 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63173 00:07:24.372 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:24.372 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.372 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63173 00:07:24.372 killing process with pid 63173 00:07:24.372 Received shutdown signal, test time was about 10.000000 seconds 00:07:24.372 00:07:24.372 Latency(us) 00:07:24.372 [2024-12-06T12:15:11.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.372 [2024-12-06T12:15:11.030Z] =================================================================================================================== 00:07:24.372 [2024-12-06T12:15:11.030Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:24.372 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:24.372 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:24.372 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63173' 00:07:24.372 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63173 00:07:24.372 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63173 00:07:24.631 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:24.889 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:25.146 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:25.146 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:25.404 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:25.404 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:25.404 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 62827 
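[Note] Up to here the dirty variant follows the same path as lvs_grow_clean; the difference is what the xtrace above records next: the lvstore is grown while bdevperf is still writing, the cluster accounting is checked, and then the target is killed with SIGKILL so the lvstore metadata is left dirty on the backing file. Sketched with the same variables as the earlier note ($nvmfpid is the target pid, 62827 in this run):

  $rpc bdev_lvol_grow_lvstore -u "$lvs"        # issued while bdevperf is writing to Nvme0n1
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61 (99 minus the 38 allocated to the lvol)
  kill -9 "$nvmfpid"                           # leave the lvstore dirty instead of shutting down cleanly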
00:07:25.404 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 62827 00:07:25.404 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 62827 Killed "${NVMF_APP[@]}" "$@" 00:07:25.404 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:25.404 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:25.404 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.404 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.404 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.404 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63316 00:07:25.404 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:25.404 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63316 00:07:25.404 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63316 ']' 00:07:25.404 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.404 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.404 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.404 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.404 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.404 [2024-12-06 12:15:12.044536] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:07:25.404 [2024-12-06 12:15:12.044807] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.662 [2024-12-06 12:15:12.181026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.662 [2024-12-06 12:15:12.209242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.662 [2024-12-06 12:15:12.209526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.662 [2024-12-06 12:15:12.209656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.662 [2024-12-06 12:15:12.209775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.662 [2024-12-06 12:15:12.209813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
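[Note] The recovery check that follows re-registers the same backing file with the freshly started target; loading the blobstore replays the dirty metadata (the "Performing recovery on blobstore" notices below), so the grown lvstore and its lvol come back with the same cluster accounting. Roughly, with the variables from the sketches above:

  $rpc bdev_aio_create "$aio_file" aio_bdev 4096       # triggers blobstore recovery on load
  $rpc bdev_get_bdevs -b "$lvol" -t 2000               # wait for the lvol to reappear
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # still 61
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 99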
00:07:25.662 [2024-12-06 12:15:12.210165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.662 [2024-12-06 12:15:12.238349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.662 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.662 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:25.662 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.662 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.662 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.921 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.921 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:25.921 [2024-12-06 12:15:12.531125] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:25.921 [2024-12-06 12:15:12.531629] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:25.921 [2024-12-06 12:15:12.532023] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:26.179 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:26.179 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8b2d97d2-4f32-4950-9b77-7b4832d7577b 00:07:26.179 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8b2d97d2-4f32-4950-9b77-7b4832d7577b 00:07:26.179 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.179 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:26.179 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.179 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.179 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:26.437 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b2d97d2-4f32-4950-9b77-7b4832d7577b -t 2000 00:07:26.437 [ 00:07:26.437 { 00:07:26.437 "name": "8b2d97d2-4f32-4950-9b77-7b4832d7577b", 00:07:26.437 "aliases": [ 00:07:26.437 "lvs/lvol" 00:07:26.437 ], 00:07:26.437 "product_name": "Logical Volume", 00:07:26.437 "block_size": 4096, 00:07:26.437 "num_blocks": 38912, 00:07:26.437 "uuid": "8b2d97d2-4f32-4950-9b77-7b4832d7577b", 00:07:26.437 "assigned_rate_limits": { 00:07:26.437 "rw_ios_per_sec": 0, 00:07:26.437 "rw_mbytes_per_sec": 0, 00:07:26.437 "r_mbytes_per_sec": 0, 00:07:26.437 "w_mbytes_per_sec": 0 00:07:26.437 }, 00:07:26.437 
"claimed": false, 00:07:26.437 "zoned": false, 00:07:26.437 "supported_io_types": { 00:07:26.437 "read": true, 00:07:26.437 "write": true, 00:07:26.437 "unmap": true, 00:07:26.437 "flush": false, 00:07:26.437 "reset": true, 00:07:26.437 "nvme_admin": false, 00:07:26.437 "nvme_io": false, 00:07:26.437 "nvme_io_md": false, 00:07:26.437 "write_zeroes": true, 00:07:26.437 "zcopy": false, 00:07:26.437 "get_zone_info": false, 00:07:26.437 "zone_management": false, 00:07:26.437 "zone_append": false, 00:07:26.437 "compare": false, 00:07:26.437 "compare_and_write": false, 00:07:26.437 "abort": false, 00:07:26.437 "seek_hole": true, 00:07:26.437 "seek_data": true, 00:07:26.437 "copy": false, 00:07:26.437 "nvme_iov_md": false 00:07:26.437 }, 00:07:26.437 "driver_specific": { 00:07:26.437 "lvol": { 00:07:26.437 "lvol_store_uuid": "dc77546a-f41b-4e25-8f4c-87facdac18f4", 00:07:26.437 "base_bdev": "aio_bdev", 00:07:26.437 "thin_provision": false, 00:07:26.437 "num_allocated_clusters": 38, 00:07:26.437 "snapshot": false, 00:07:26.437 "clone": false, 00:07:26.437 "esnap_clone": false 00:07:26.437 } 00:07:26.437 } 00:07:26.437 } 00:07:26.437 ] 00:07:26.437 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:26.437 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:26.437 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:26.695 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:26.695 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:26.695 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:26.953 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:26.953 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:27.211 [2024-12-06 12:15:13.713148] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:27.211 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:27.211 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:27.211 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:27.211 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.211 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.211 12:15:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.211 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.211 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.211 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.211 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.211 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:27.211 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:27.469 request: 00:07:27.469 { 00:07:27.469 "uuid": "dc77546a-f41b-4e25-8f4c-87facdac18f4", 00:07:27.469 "method": "bdev_lvol_get_lvstores", 00:07:27.469 "req_id": 1 00:07:27.469 } 00:07:27.469 Got JSON-RPC error response 00:07:27.469 response: 00:07:27.469 { 00:07:27.469 "code": -19, 00:07:27.469 "message": "No such device" 00:07:27.469 } 00:07:27.469 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:27.469 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.469 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.469 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.469 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:27.726 aio_bdev 00:07:27.726 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8b2d97d2-4f32-4950-9b77-7b4832d7577b 00:07:27.726 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8b2d97d2-4f32-4950-9b77-7b4832d7577b 00:07:27.726 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.726 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:27.726 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.726 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.726 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:27.984 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b2d97d2-4f32-4950-9b77-7b4832d7577b -t 2000 00:07:27.984 [ 00:07:27.984 { 
00:07:27.984 "name": "8b2d97d2-4f32-4950-9b77-7b4832d7577b", 00:07:27.984 "aliases": [ 00:07:27.984 "lvs/lvol" 00:07:27.984 ], 00:07:27.984 "product_name": "Logical Volume", 00:07:27.984 "block_size": 4096, 00:07:27.984 "num_blocks": 38912, 00:07:27.984 "uuid": "8b2d97d2-4f32-4950-9b77-7b4832d7577b", 00:07:27.984 "assigned_rate_limits": { 00:07:27.984 "rw_ios_per_sec": 0, 00:07:27.984 "rw_mbytes_per_sec": 0, 00:07:27.984 "r_mbytes_per_sec": 0, 00:07:27.984 "w_mbytes_per_sec": 0 00:07:27.984 }, 00:07:27.984 "claimed": false, 00:07:27.984 "zoned": false, 00:07:27.984 "supported_io_types": { 00:07:27.984 "read": true, 00:07:27.984 "write": true, 00:07:27.984 "unmap": true, 00:07:27.984 "flush": false, 00:07:27.984 "reset": true, 00:07:27.984 "nvme_admin": false, 00:07:27.984 "nvme_io": false, 00:07:27.984 "nvme_io_md": false, 00:07:27.984 "write_zeroes": true, 00:07:27.984 "zcopy": false, 00:07:27.984 "get_zone_info": false, 00:07:27.984 "zone_management": false, 00:07:27.984 "zone_append": false, 00:07:27.984 "compare": false, 00:07:27.984 "compare_and_write": false, 00:07:27.984 "abort": false, 00:07:27.984 "seek_hole": true, 00:07:27.984 "seek_data": true, 00:07:27.984 "copy": false, 00:07:27.984 "nvme_iov_md": false 00:07:27.984 }, 00:07:27.984 "driver_specific": { 00:07:27.984 "lvol": { 00:07:27.984 "lvol_store_uuid": "dc77546a-f41b-4e25-8f4c-87facdac18f4", 00:07:27.984 "base_bdev": "aio_bdev", 00:07:27.984 "thin_provision": false, 00:07:27.984 "num_allocated_clusters": 38, 00:07:27.984 "snapshot": false, 00:07:27.984 "clone": false, 00:07:27.984 "esnap_clone": false 00:07:27.984 } 00:07:27.984 } 00:07:27.984 } 00:07:27.984 ] 00:07:27.984 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:27.984 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:27.984 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:28.242 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:28.242 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:28.242 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:28.500 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:28.500 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8b2d97d2-4f32-4950-9b77-7b4832d7577b 00:07:28.758 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc77546a-f41b-4e25-8f4c-87facdac18f4 00:07:29.016 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:29.272 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:29.529 00:07:29.529 real 0m18.756s 00:07:29.529 user 0m37.736s 00:07:29.529 sys 0m9.384s 00:07:29.529 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.529 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:29.529 ************************************ 00:07:29.529 END TEST lvs_grow_dirty 00:07:29.529 ************************************ 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:29.786 nvmf_trace.0 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:29.786 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:30.350 rmmod nvme_tcp 00:07:30.350 rmmod nvme_fabrics 00:07:30.350 rmmod nvme_keyring 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63316 ']' 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63316 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63316 ']' 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63316 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:30.350 12:15:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63316 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.350 killing process with pid 63316 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63316' 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63316 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63316 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:30.350 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:30.350 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:30.350 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:30.350 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:07:30.608 00:07:30.608 real 0m38.826s 00:07:30.608 user 1m0.285s 00:07:30.608 sys 0m12.740s 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.608 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.608 ************************************ 00:07:30.608 END TEST nvmf_lvs_grow 00:07:30.608 ************************************ 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.867 ************************************ 00:07:30.867 START TEST nvmf_bdev_io_wait 00:07:30.867 ************************************ 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:30.867 * Looking for test storage... 
00:07:30.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.867 --rc genhtml_branch_coverage=1 00:07:30.867 --rc genhtml_function_coverage=1 00:07:30.867 --rc genhtml_legend=1 00:07:30.867 --rc geninfo_all_blocks=1 00:07:30.867 --rc geninfo_unexecuted_blocks=1 00:07:30.867 00:07:30.867 ' 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.867 --rc genhtml_branch_coverage=1 00:07:30.867 --rc genhtml_function_coverage=1 00:07:30.867 --rc genhtml_legend=1 00:07:30.867 --rc geninfo_all_blocks=1 00:07:30.867 --rc geninfo_unexecuted_blocks=1 00:07:30.867 00:07:30.867 ' 00:07:30.867 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.868 --rc genhtml_branch_coverage=1 00:07:30.868 --rc genhtml_function_coverage=1 00:07:30.868 --rc genhtml_legend=1 00:07:30.868 --rc geninfo_all_blocks=1 00:07:30.868 --rc geninfo_unexecuted_blocks=1 00:07:30.868 00:07:30.868 ' 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.868 --rc genhtml_branch_coverage=1 00:07:30.868 --rc genhtml_function_coverage=1 00:07:30.868 --rc genhtml_legend=1 00:07:30.868 --rc geninfo_all_blocks=1 00:07:30.868 --rc geninfo_unexecuted_blocks=1 00:07:30.868 00:07:30.868 ' 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.868 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
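nvmftestinit, invoked next, builds the virtual test network the target will listen on. The veth/bridge bring-up that follows reduces to roughly this sketch (interface names and addresses as they appear in this log; the second initiator/target interface pair and the cleanup of any previous topology are omitted, so treat it as an illustration of the topology rather than a substitute for nvmf/common.sh):

# rough sketch of nvmf_veth_init as exercised below (run as root)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the host-side veth ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                           # initiator -> target reachability check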
00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:30.868 
12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:30.868 Cannot find device "nvmf_init_br" 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:30.868 Cannot find device "nvmf_init_br2" 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:30.868 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:31.126 Cannot find device "nvmf_tgt_br" 00:07:31.126 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:07:31.126 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:31.126 Cannot find device "nvmf_tgt_br2" 00:07:31.126 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:07:31.126 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:31.126 Cannot find device "nvmf_init_br" 00:07:31.126 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:07:31.126 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:31.127 Cannot find device "nvmf_init_br2" 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:31.127 Cannot find device "nvmf_tgt_br" 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:31.127 Cannot find device "nvmf_tgt_br2" 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:31.127 Cannot find device "nvmf_br" 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:31.127 Cannot find device "nvmf_init_if" 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:31.127 Cannot find device "nvmf_init_if2" 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:31.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:07:31.127 
12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:31.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:31.127 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:31.385 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:31.385 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:31.385 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:31.385 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:31.385 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:31.385 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:31.385 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:31.386 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:31.386 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:07:31.386 00:07:31.386 --- 10.0.0.3 ping statistics --- 00:07:31.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.386 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:31.386 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:31.386 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:07:31.386 00:07:31.386 --- 10.0.0.4 ping statistics --- 00:07:31.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.386 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:31.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:07:31.386 00:07:31.386 --- 10.0.0.1 ping statistics --- 00:07:31.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.386 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:31.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:31.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:07:31.386 00:07:31.386 --- 10.0.0.2 ping statistics --- 00:07:31.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.386 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63680 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63680 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 63680 ']' 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:31.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.386 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.386 [2024-12-06 12:15:17.947142] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:07:31.386 [2024-12-06 12:15:17.947269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.645 [2024-12-06 12:15:18.087379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.645 [2024-12-06 12:15:18.117343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.645 [2024-12-06 12:15:18.117406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.645 [2024-12-06 12:15:18.117416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.645 [2024-12-06 12:15:18.117423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.645 [2024-12-06 12:15:18.117429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.645 [2024-12-06 12:15:18.118196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.645 [2024-12-06 12:15:18.118530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.645 [2024-12-06 12:15:18.119005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.645 [2024-12-06 12:15:18.119048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.645 [2024-12-06 12:15:18.265976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.645 [2024-12-06 12:15:18.280806] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.645 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.905 Malloc0 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.905 [2024-12-06 12:15:18.333229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63709 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63711 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:31.905 12:15:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:31.905 { 00:07:31.905 "params": { 00:07:31.905 "name": "Nvme$subsystem", 00:07:31.905 "trtype": "$TEST_TRANSPORT", 00:07:31.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:31.905 "adrfam": "ipv4", 00:07:31.905 "trsvcid": "$NVMF_PORT", 00:07:31.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.905 "hdgst": ${hdgst:-false}, 00:07:31.905 "ddgst": ${ddgst:-false} 00:07:31.905 }, 00:07:31.905 "method": "bdev_nvme_attach_controller" 00:07:31.905 } 00:07:31.905 EOF 00:07:31.905 )") 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:31.905 { 00:07:31.905 "params": { 00:07:31.905 "name": "Nvme$subsystem", 00:07:31.905 "trtype": "$TEST_TRANSPORT", 00:07:31.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:31.905 "adrfam": "ipv4", 00:07:31.905 "trsvcid": "$NVMF_PORT", 00:07:31.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.905 "hdgst": ${hdgst:-false}, 00:07:31.905 "ddgst": ${ddgst:-false} 00:07:31.905 }, 00:07:31.905 "method": "bdev_nvme_attach_controller" 00:07:31.905 } 00:07:31.905 EOF 00:07:31.905 )") 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63713 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63718 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:31.905 { 00:07:31.905 "params": { 00:07:31.905 "name": "Nvme$subsystem", 00:07:31.905 "trtype": "$TEST_TRANSPORT", 00:07:31.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:31.905 "adrfam": "ipv4", 00:07:31.905 "trsvcid": 
"$NVMF_PORT", 00:07:31.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.905 "hdgst": ${hdgst:-false}, 00:07:31.905 "ddgst": ${ddgst:-false} 00:07:31.905 }, 00:07:31.905 "method": "bdev_nvme_attach_controller" 00:07:31.905 } 00:07:31.905 EOF 00:07:31.905 )") 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:31.905 { 00:07:31.905 "params": { 00:07:31.905 "name": "Nvme$subsystem", 00:07:31.905 "trtype": "$TEST_TRANSPORT", 00:07:31.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:31.905 "adrfam": "ipv4", 00:07:31.905 "trsvcid": "$NVMF_PORT", 00:07:31.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.905 "hdgst": ${hdgst:-false}, 00:07:31.905 "ddgst": ${ddgst:-false} 00:07:31.905 }, 00:07:31.905 "method": "bdev_nvme_attach_controller" 00:07:31.905 } 00:07:31.905 EOF 00:07:31.905 )") 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:31.905 "params": { 00:07:31.905 "name": "Nvme1", 00:07:31.905 "trtype": "tcp", 00:07:31.905 "traddr": "10.0.0.3", 00:07:31.905 "adrfam": "ipv4", 00:07:31.905 "trsvcid": "4420", 00:07:31.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:31.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:31.905 "hdgst": false, 00:07:31.905 "ddgst": false 00:07:31.905 }, 00:07:31.905 "method": "bdev_nvme_attach_controller" 00:07:31.905 }' 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:31.905 "params": { 00:07:31.905 "name": "Nvme1", 00:07:31.905 "trtype": "tcp", 00:07:31.905 "traddr": "10.0.0.3", 00:07:31.905 "adrfam": "ipv4", 00:07:31.905 "trsvcid": "4420", 00:07:31.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:31.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:31.905 "hdgst": false, 00:07:31.905 "ddgst": false 00:07:31.905 }, 00:07:31.905 "method": "bdev_nvme_attach_controller" 00:07:31.905 }' 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:31.905 "params": { 00:07:31.905 "name": "Nvme1", 00:07:31.905 "trtype": "tcp", 00:07:31.905 "traddr": "10.0.0.3", 00:07:31.905 "adrfam": "ipv4", 00:07:31.905 "trsvcid": "4420", 00:07:31.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:31.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:31.905 "hdgst": false, 00:07:31.905 "ddgst": false 00:07:31.905 }, 00:07:31.905 "method": "bdev_nvme_attach_controller" 00:07:31.905 }' 00:07:31.905 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:31.906 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:31.906 "params": { 00:07:31.906 "name": "Nvme1", 00:07:31.906 "trtype": "tcp", 00:07:31.906 "traddr": "10.0.0.3", 00:07:31.906 "adrfam": "ipv4", 00:07:31.906 "trsvcid": "4420", 00:07:31.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:31.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:31.906 "hdgst": false, 00:07:31.906 "ddgst": false 00:07:31.906 }, 00:07:31.906 "method": "bdev_nvme_attach_controller" 00:07:31.906 }' 00:07:31.906 [2024-12-06 12:15:18.403634] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:07:31.906 [2024-12-06 12:15:18.403727] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:31.906 [2024-12-06 12:15:18.406955] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:07:31.906 [2024-12-06 12:15:18.407195] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:31.906 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63709 00:07:31.906 [2024-12-06 12:15:18.416990] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:07:31.906 [2024-12-06 12:15:18.417065] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:31.906 [2024-12-06 12:15:18.417414] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:07:31.906 [2024-12-06 12:15:18.417478] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:32.165 [2024-12-06 12:15:18.598236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.165 [2024-12-06 12:15:18.628918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:32.165 [2024-12-06 12:15:18.642208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.165 [2024-12-06 12:15:18.642847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.165 [2024-12-06 12:15:18.673165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:32.165 [2024-12-06 12:15:18.680987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.165 [2024-12-06 12:15:18.687020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.165 [2024-12-06 12:15:18.712067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:32.165 [2024-12-06 12:15:18.726300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.165 [2024-12-06 12:15:18.726377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.165 Running I/O for 1 seconds... 00:07:32.165 [2024-12-06 12:15:18.757732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:32.165 [2024-12-06 12:15:18.771695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.165 Running I/O for 1 seconds... 00:07:32.424 Running I/O for 1 seconds... 00:07:32.424 Running I/O for 1 seconds... 
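At this point the script has four bdevperf secondary processes running in parallel, one per I/O type and core mask (0x10 write, 0x20 read, 0x40 flush, 0x80 unmap), all attached to the same nqn.2016-06.io.spdk:cnode1 target, and it waits on each PID (63709/63711/63713/63718 above). A simplified sketch of that launch-and-wait pattern (variable names illustrative; binary path and flags as in the trace):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

$bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The per-job results that follow report one Nvme1n1 line per workload/core mask.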
00:07:33.360 167400.00 IOPS, 653.91 MiB/s 00:07:33.360 Latency(us) 00:07:33.360 [2024-12-06T12:15:20.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.360 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:33.360 Nvme1n1 : 1.00 167074.02 652.63 0.00 0.00 762.21 342.57 1921.40 00:07:33.360 [2024-12-06T12:15:20.018Z] =================================================================================================================== 00:07:33.360 [2024-12-06T12:15:20.018Z] Total : 167074.02 652.63 0.00 0.00 762.21 342.57 1921.40 00:07:33.360 12001.00 IOPS, 46.88 MiB/s 00:07:33.360 Latency(us) 00:07:33.360 [2024-12-06T12:15:20.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.360 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:33.360 Nvme1n1 : 1.01 12060.22 47.11 0.00 0.00 10576.29 6196.13 18230.92 00:07:33.360 [2024-12-06T12:15:20.018Z] =================================================================================================================== 00:07:33.360 [2024-12-06T12:15:20.018Z] Total : 12060.22 47.11 0.00 0.00 10576.29 6196.13 18230.92 00:07:33.360 7815.00 IOPS, 30.53 MiB/s 00:07:33.360 Latency(us) 00:07:33.360 [2024-12-06T12:15:20.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.360 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:33.360 Nvme1n1 : 1.01 7858.91 30.70 0.00 0.00 16192.19 9472.93 25261.15 00:07:33.360 [2024-12-06T12:15:20.018Z] =================================================================================================================== 00:07:33.360 [2024-12-06T12:15:20.018Z] Total : 7858.91 30.70 0.00 0.00 16192.19 9472.93 25261.15 00:07:33.360 7855.00 IOPS, 30.68 MiB/s 00:07:33.360 Latency(us) 00:07:33.360 [2024-12-06T12:15:20.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.360 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:33.360 Nvme1n1 : 1.01 7927.18 30.97 0.00 0.00 16071.73 6642.97 29074.15 00:07:33.360 [2024-12-06T12:15:20.018Z] =================================================================================================================== 00:07:33.360 [2024-12-06T12:15:20.018Z] Total : 7927.18 30.97 0.00 0.00 16071.73 6642.97 29074.15 00:07:33.360 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63711 00:07:33.360 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63713 00:07:33.360 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63718 00:07:33.360 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:33.360 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.360 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:33.619 rmmod nvme_tcp 00:07:33.619 rmmod nvme_fabrics 00:07:33.619 rmmod nvme_keyring 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63680 ']' 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63680 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 63680 ']' 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 63680 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63680 00:07:33.619 killing process with pid 63680 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63680' 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 63680 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 63680 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:33.619 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:07:33.878 00:07:33.878 real 0m3.212s 00:07:33.878 user 0m12.657s 00:07:33.878 sys 0m2.074s 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.878 ************************************ 00:07:33.878 END TEST nvmf_bdev_io_wait 00:07:33.878 ************************************ 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.878 12:15:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.139 ************************************ 00:07:34.139 START TEST nvmf_queue_depth 00:07:34.139 ************************************ 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:34.139 * Looking for test storage... 
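The queue_depth test that starts here brings up an nvmf target inside the nvmf_tgt_ns_spdk namespace and then drives it with a single bdevperf instance at a queue depth of 1024. A rough outline of the rpc_cmd sequence the trace below performs (command names and arguments as they appear later in this log; the rpc.py path is assumed):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevperf then attaches to that subsystem over 10.0.0.3:4420 and runs a 10-second verify workload with -q 1024 -o 4096.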
00:07:34.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:34.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.139 --rc genhtml_branch_coverage=1 00:07:34.139 --rc genhtml_function_coverage=1 00:07:34.139 --rc genhtml_legend=1 00:07:34.139 --rc geninfo_all_blocks=1 00:07:34.139 --rc geninfo_unexecuted_blocks=1 00:07:34.139 00:07:34.139 ' 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:34.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.139 --rc genhtml_branch_coverage=1 00:07:34.139 --rc genhtml_function_coverage=1 00:07:34.139 --rc genhtml_legend=1 00:07:34.139 --rc geninfo_all_blocks=1 00:07:34.139 --rc geninfo_unexecuted_blocks=1 00:07:34.139 00:07:34.139 ' 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:34.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.139 --rc genhtml_branch_coverage=1 00:07:34.139 --rc genhtml_function_coverage=1 00:07:34.139 --rc genhtml_legend=1 00:07:34.139 --rc geninfo_all_blocks=1 00:07:34.139 --rc geninfo_unexecuted_blocks=1 00:07:34.139 00:07:34.139 ' 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:34.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.139 --rc genhtml_branch_coverage=1 00:07:34.139 --rc genhtml_function_coverage=1 00:07:34.139 --rc genhtml_legend=1 00:07:34.139 --rc geninfo_all_blocks=1 00:07:34.139 --rc geninfo_unexecuted_blocks=1 00:07:34.139 00:07:34.139 ' 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.139 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:34.140 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:34.140 
12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:34.140 12:15:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:34.140 Cannot find device "nvmf_init_br" 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:34.140 Cannot find device "nvmf_init_br2" 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:07:34.140 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:34.400 Cannot find device "nvmf_tgt_br" 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:34.401 Cannot find device "nvmf_tgt_br2" 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:34.401 Cannot find device "nvmf_init_br" 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:34.401 Cannot find device "nvmf_init_br2" 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:34.401 Cannot find device "nvmf_tgt_br" 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:34.401 Cannot find device "nvmf_tgt_br2" 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:34.401 Cannot find device "nvmf_br" 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:34.401 Cannot find device "nvmf_init_if" 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:34.401 Cannot find device "nvmf_init_if2" 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:34.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.401 12:15:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:34.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:34.401 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:34.401 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:34.401 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:34.401 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:34.401 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:34.401 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:34.401 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:34.401 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:34.401 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:34.401 
12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:34.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:34.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:07:34.661 00:07:34.661 --- 10.0.0.3 ping statistics --- 00:07:34.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.661 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:34.661 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:34.661 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:07:34.661 00:07:34.661 --- 10.0.0.4 ping statistics --- 00:07:34.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.661 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:34.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:07:34.661 00:07:34.661 --- 10.0.0.1 ping statistics --- 00:07:34.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.661 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:34.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:34.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:07:34.661 00:07:34.661 --- 10.0.0.2 ping statistics --- 00:07:34.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.661 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=63970 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 63970 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 63970 ']' 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.661 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.661 [2024-12-06 12:15:21.205016] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:07:34.661 [2024-12-06 12:15:21.205102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.921 [2024-12-06 12:15:21.353976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.921 [2024-12-06 12:15:21.383920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.921 [2024-12-06 12:15:21.383974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.921 [2024-12-06 12:15:21.383984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.921 [2024-12-06 12:15:21.383991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.921 [2024-12-06 12:15:21.383997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.921 [2024-12-06 12:15:21.384308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.921 [2024-12-06 12:15:21.412606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.921 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.921 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:34.921 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:34.921 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.921 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.921 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.921 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:34.921 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.921 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.922 [2024-12-06 12:15:21.510508] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.922 Malloc0 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.922 [2024-12-06 12:15:21.552735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=63995 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 63995 /var/tmp/bdevperf.sock 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 63995 ']' 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.922 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.181 [2024-12-06 12:15:21.614536] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:07:35.181 [2024-12-06 12:15:21.614629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63995 ] 00:07:35.181 [2024-12-06 12:15:21.767276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.181 [2024-12-06 12:15:21.805955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.440 [2024-12-06 12:15:21.839651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.440 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.440 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:35.440 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:35.440 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.440 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.440 NVMe0n1 00:07:35.440 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.440 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:35.440 Running I/O for 10 seconds... 00:07:37.756 8192.00 IOPS, 32.00 MiB/s [2024-12-06T12:15:25.350Z] 8911.00 IOPS, 34.81 MiB/s [2024-12-06T12:15:26.286Z] 9225.33 IOPS, 36.04 MiB/s [2024-12-06T12:15:27.223Z] 9357.50 IOPS, 36.55 MiB/s [2024-12-06T12:15:28.160Z] 9496.00 IOPS, 37.09 MiB/s [2024-12-06T12:15:29.098Z] 9573.00 IOPS, 37.39 MiB/s [2024-12-06T12:15:30.476Z] 9570.14 IOPS, 37.38 MiB/s [2024-12-06T12:15:31.414Z] 9623.25 IOPS, 37.59 MiB/s [2024-12-06T12:15:32.351Z] 9646.89 IOPS, 37.68 MiB/s [2024-12-06T12:15:32.351Z] 9654.50 IOPS, 37.71 MiB/s 00:07:45.693 Latency(us) 00:07:45.693 [2024-12-06T12:15:32.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.693 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:45.693 Verification LBA range: start 0x0 length 0x4000 00:07:45.693 NVMe0n1 : 10.05 9704.59 37.91 0.00 0.00 105084.04 7685.59 76260.07 00:07:45.693 [2024-12-06T12:15:32.351Z] =================================================================================================================== 00:07:45.693 [2024-12-06T12:15:32.351Z] Total : 9704.59 37.91 0.00 0.00 105084.04 7685.59 76260.07 00:07:45.693 { 00:07:45.693 "results": [ 00:07:45.693 { 00:07:45.693 "job": "NVMe0n1", 00:07:45.693 "core_mask": "0x1", 00:07:45.693 "workload": "verify", 00:07:45.693 "status": "finished", 00:07:45.693 "verify_range": { 00:07:45.693 "start": 0, 00:07:45.693 "length": 16384 00:07:45.693 }, 00:07:45.693 "queue_depth": 1024, 00:07:45.693 "io_size": 4096, 00:07:45.693 "runtime": 10.053907, 00:07:45.693 "iops": 9704.5854909937, 00:07:45.693 "mibps": 37.90853707419414, 00:07:45.693 "io_failed": 0, 00:07:45.693 "io_timeout": 0, 00:07:45.693 "avg_latency_us": 105084.03940730057, 00:07:45.693 "min_latency_us": 7685.585454545455, 00:07:45.693 "max_latency_us": 76260.07272727272 00:07:45.693 } 
00:07:45.693 ], 00:07:45.693 "core_count": 1 00:07:45.693 } 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 63995 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 63995 ']' 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 63995 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63995 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.693 killing process with pid 63995 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63995' 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 63995 00:07:45.693 Received shutdown signal, test time was about 10.000000 seconds 00:07:45.693 00:07:45.693 Latency(us) 00:07:45.693 [2024-12-06T12:15:32.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.693 [2024-12-06T12:15:32.351Z] =================================================================================================================== 00:07:45.693 [2024-12-06T12:15:32.351Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 63995 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:45.693 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:45.952 rmmod nvme_tcp 00:07:45.952 rmmod nvme_fabrics 00:07:45.952 rmmod nvme_keyring 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 63970 ']' 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 63970 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 63970 ']' 00:07:45.952 
12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 63970 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63970 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:45.952 killing process with pid 63970 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63970' 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 63970 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 63970 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:45.952 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:46.212 12:15:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:07:46.212 00:07:46.212 real 0m12.317s 00:07:46.212 user 0m21.090s 00:07:46.212 sys 0m2.060s 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.212 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.212 ************************************ 00:07:46.212 END TEST nvmf_queue_depth 00:07:46.212 ************************************ 00:07:46.473 12:15:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:46.473 12:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.473 12:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.473 12:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.473 ************************************ 00:07:46.473 START TEST nvmf_target_multipath 00:07:46.473 ************************************ 00:07:46.473 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:46.473 * Looking for test storage... 
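A note on the teardown that scrolled past just before this multipath test begins: nvmftestfini unloads the kernel initiator modules, strips only the firewall rules the suite added, and dismantles the veth/bridge/namespace topology (the multipath test rebuilds the same topology below). Roughly, with the interface names from this log; the final namespace removal is what _remove_spdk_ns amounts to and is assumed here rather than shown verbatim:

  # Unload the initiator-side kernel modules pulled in for the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Drop only the iptables rules the suite added (they are tagged SPDK_NVMF)
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Detach the bridge ports, then delete the bridge, veth pairs and namespace
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" nomaster
      ip link set "$port" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed: performed by _remove_spdk_ns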
00:07:46.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:46.473 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.473 --rc genhtml_branch_coverage=1 00:07:46.473 --rc genhtml_function_coverage=1 00:07:46.473 --rc genhtml_legend=1 00:07:46.473 --rc geninfo_all_blocks=1 00:07:46.473 --rc geninfo_unexecuted_blocks=1 00:07:46.473 00:07:46.473 ' 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.473 --rc genhtml_branch_coverage=1 00:07:46.473 --rc genhtml_function_coverage=1 00:07:46.473 --rc genhtml_legend=1 00:07:46.473 --rc geninfo_all_blocks=1 00:07:46.473 --rc geninfo_unexecuted_blocks=1 00:07:46.473 00:07:46.473 ' 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.473 --rc genhtml_branch_coverage=1 00:07:46.473 --rc genhtml_function_coverage=1 00:07:46.473 --rc genhtml_legend=1 00:07:46.473 --rc geninfo_all_blocks=1 00:07:46.473 --rc geninfo_unexecuted_blocks=1 00:07:46.473 00:07:46.473 ' 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.473 --rc genhtml_branch_coverage=1 00:07:46.473 --rc genhtml_function_coverage=1 00:07:46.473 --rc genhtml_legend=1 00:07:46.473 --rc geninfo_all_blocks=1 00:07:46.473 --rc geninfo_unexecuted_blocks=1 00:07:46.473 00:07:46.473 ' 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.473 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.474 
12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.474 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:46.474 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.734 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:46.735 12:15:33 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:46.735 Cannot find device "nvmf_init_br" 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:46.735 Cannot find device "nvmf_init_br2" 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:46.735 Cannot find device "nvmf_tgt_br" 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:46.735 Cannot find device "nvmf_tgt_br2" 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:46.735 Cannot find device "nvmf_init_br" 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:46.735 Cannot find device "nvmf_init_br2" 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:46.735 Cannot find device "nvmf_tgt_br" 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:46.735 Cannot find device "nvmf_tgt_br2" 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:46.735 Cannot find device "nvmf_br" 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:46.735 Cannot find device "nvmf_init_if" 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:46.735 Cannot find device "nvmf_init_if2" 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:46.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:46.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
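The interface setup interleaved with the "Cannot find device" probes above is easier to read as one block: nvmf_veth_init creates two initiator/target veth pairs, moves the target ends into the nvmf_tgt_ns_spdk namespace, and (a few lines further down) joins all the bridge ends on nvmf_br and opens TCP port 4420. A minimal reproduction using the same names and addresses, assuming root privileges on a host where none of these interfaces exist yet:

  ip netns add nvmf_tgt_ns_spdk

  # Initiator side: 10.0.0.1 and 10.0.0.2 stay in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up

  # Target side: 10.0.0.3 and 10.0.0.4 live inside the namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Everything meets on one bridge; allow NVMe/TCP (port 4420) in from the veths
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" up
      ip link set "$port" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

After this, the four pings in the log (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from inside the namespace) confirm that both paths to the target are reachable, which is what the multipath test relies on when it later flips ANA states between the 10.0.0.3 and 10.0.0.4 listeners.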
00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:46.735 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:46.995 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:46.995 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:46.996 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:46.996 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:07:46.996 00:07:46.996 --- 10.0.0.3 ping statistics --- 00:07:46.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.996 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:46.996 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:46.996 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:07:46.996 00:07:46.996 --- 10.0.0.4 ping statistics --- 00:07:46.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.996 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:46.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:46.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:46.996 00:07:46.996 --- 10.0.0.1 ping statistics --- 00:07:46.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.996 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:46.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:07:46.996 00:07:46.996 --- 10.0.0.2 ping statistics --- 00:07:46.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.996 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64359 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64359 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64359 ']' 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:07:46.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.996 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:46.996 [2024-12-06 12:15:33.582636] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:07:46.996 [2024-12-06 12:15:33.582746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.256 [2024-12-06 12:15:33.724389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.256 [2024-12-06 12:15:33.755083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.256 [2024-12-06 12:15:33.755195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.256 [2024-12-06 12:15:33.755211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.256 [2024-12-06 12:15:33.755220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.256 [2024-12-06 12:15:33.755227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.256 [2024-12-06 12:15:33.756005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.256 [2024-12-06 12:15:33.756139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.256 [2024-12-06 12:15:33.758205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.256 [2024-12-06 12:15:33.758279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.256 [2024-12-06 12:15:33.789207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.194 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.194 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:07:48.194 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.194 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.194 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:48.194 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.194 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:48.453 [2024-12-06 12:15:34.887183] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.453 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:07:48.711 Malloc0 00:07:48.711 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:07:48.969 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:49.229 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:49.487 [2024-12-06 12:15:36.029405] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:49.487 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:07:49.746 [2024-12-06 12:15:36.253552] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:07:49.746 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid=539e2455-b2a8-46ce-bfce-40a317783b05 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:07:50.005 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid=539e2455-b2a8-46ce-bfce-40a317783b05 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:07:50.005 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.005 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:07:50.005 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.005 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:07:50.005 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:07:51.933 12:15:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64454 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:07:51.933 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:07:52.211 [global] 00:07:52.211 thread=1 00:07:52.211 invalidate=1 00:07:52.211 rw=randrw 00:07:52.211 time_based=1 00:07:52.211 runtime=6 00:07:52.211 ioengine=libaio 00:07:52.211 direct=1 00:07:52.211 bs=4096 00:07:52.211 iodepth=128 00:07:52.211 norandommap=0 00:07:52.211 numjobs=1 00:07:52.211 00:07:52.211 verify_dump=1 00:07:52.211 verify_backlog=512 00:07:52.211 verify_state_save=0 00:07:52.211 do_verify=1 00:07:52.211 verify=crc32c-intel 00:07:52.211 [job0] 00:07:52.211 filename=/dev/nvme0n1 00:07:52.211 Could not set queue depth (nvme0n1) 00:07:52.211 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:52.211 fio-3.35 00:07:52.211 Starting 1 thread 00:07:53.151 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:07:53.409 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:53.667 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:07:53.924 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:54.182 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64454 00:07:58.387 00:07:58.387 job0: (groupid=0, jobs=1): err= 0: pid=64480: Fri Dec 6 12:15:44 2024 00:07:58.387 read: IOPS=10.8k, BW=42.4MiB/s (44.4MB/s)(255MiB/6006msec) 00:07:58.387 slat (usec): min=3, max=6105, avg=53.36, stdev=210.40 00:07:58.387 clat (usec): min=1529, max=14623, avg=7984.41, stdev=1382.83 00:07:58.387 lat (usec): min=1539, max=14658, avg=8037.77, stdev=1387.38 00:07:58.387 clat percentiles (usec): 00:07:58.387 | 1.00th=[ 4113], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7242], 00:07:58.387 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 8029], 00:07:58.387 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[11076], 00:07:58.387 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13173], 99.95th=[13435], 00:07:58.387 | 99.99th=[13829] 00:07:58.387 bw ( KiB/s): min= 7744, max=28640, per=52.74%, avg=22886.55, stdev=6392.50, samples=11 00:07:58.387 iops : min= 1936, max= 7160, avg=5721.64, stdev=1598.13, samples=11 00:07:58.387 write: IOPS=6356, BW=24.8MiB/s (26.0MB/s)(135MiB/5451msec); 0 zone resets 00:07:58.387 slat (usec): min=15, max=2851, avg=63.13, stdev=151.70 00:07:58.387 clat (usec): min=1355, max=13991, avg=6976.78, stdev=1255.51 00:07:58.387 lat (usec): min=1417, max=14021, avg=7039.91, stdev=1260.00 00:07:58.387 clat percentiles (usec): 00:07:58.387 | 1.00th=[ 3195], 5.00th=[ 4047], 10.00th=[ 5407], 20.00th=[ 6456], 00:07:58.387 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7308], 00:07:58.387 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8356], 00:07:58.387 | 99.00th=[10683], 99.50th=[11076], 99.90th=[12125], 99.95th=[12387], 00:07:58.388 | 99.99th=[13566] 00:07:58.388 bw ( KiB/s): min= 8192, max=27888, per=90.00%, avg=22885.82, stdev=6240.11, samples=11 00:07:58.388 iops : min= 2048, max= 6972, avg=5721.45, stdev=1560.03, samples=11 00:07:58.388 lat (msec) : 2=0.02%, 4=2.16%, 10=92.15%, 20=5.67% 00:07:58.388 cpu : usr=5.94%, sys=22.03%, ctx=5751, majf=0, minf=90 00:07:58.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:07:58.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:58.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:58.388 issued rwts: total=65152,34650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:58.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:58.388 00:07:58.388 Run status group 0 (all jobs): 00:07:58.388 READ: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=255MiB (267MB), run=6006-6006msec 00:07:58.388 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=135MiB (142MB), run=5451-5451msec 00:07:58.388 00:07:58.388 Disk stats (read/write): 00:07:58.388 nvme0n1: ios=64197/33987, merge=0/0, ticks=490043/222070, in_queue=712113, util=98.68% 00:07:58.388 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:07:58.647 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64561 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:07:58.905 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:07:58.905 [global] 00:07:58.905 thread=1 00:07:58.905 invalidate=1 00:07:58.905 rw=randrw 00:07:58.905 time_based=1 00:07:58.905 runtime=6 00:07:58.905 ioengine=libaio 00:07:58.905 direct=1 00:07:58.905 bs=4096 00:07:58.905 iodepth=128 00:07:58.905 norandommap=0 00:07:58.905 numjobs=1 00:07:58.905 00:07:58.905 verify_dump=1 00:07:58.905 verify_backlog=512 00:07:58.905 verify_state_save=0 00:07:58.905 do_verify=1 00:07:58.905 verify=crc32c-intel 00:07:58.905 [job0] 00:07:58.905 filename=/dev/nvme0n1 00:07:58.905 Could not set queue depth (nvme0n1) 00:07:59.164 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:59.164 fio-3.35 00:07:59.164 Starting 1 thread 00:08:00.100 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:00.360 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:00.619 
12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:00.619 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:00.878 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:01.137 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:01.137 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:01.137 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:01.138 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:01.138 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:01.138 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:01.138 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:01.138 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:01.138 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:01.138 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:01.138 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:01.138 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:01.138 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64561 00:08:05.332 00:08:05.332 job0: (groupid=0, jobs=1): err= 0: pid=64583: Fri Dec 6 12:15:51 2024 00:08:05.332 read: IOPS=12.3k, BW=48.1MiB/s (50.5MB/s)(289MiB/6005msec) 00:08:05.332 slat (usec): min=3, max=7187, avg=42.60, stdev=185.35 00:08:05.332 clat (usec): min=537, max=15224, avg=7264.40, stdev=1832.15 00:08:05.332 lat (usec): min=570, max=15252, avg=7307.00, stdev=1846.10 00:08:05.332 clat percentiles (usec): 00:08:05.332 | 1.00th=[ 2900], 5.00th=[ 3982], 10.00th=[ 4686], 20.00th=[ 5735], 00:08:05.332 | 30.00th=[ 6783], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7767], 00:08:05.332 | 70.00th=[ 7963], 80.00th=[ 8291], 90.00th=[ 8848], 95.00th=[10552], 00:08:05.332 | 99.00th=[12387], 99.50th=[13042], 99.90th=[14222], 99.95th=[14484], 00:08:05.332 | 99.99th=[14877] 00:08:05.332 bw ( KiB/s): min=11168, max=39256, per=50.30%, avg=24795.33, stdev=7869.36, samples=12 00:08:05.332 iops : min= 2792, max= 9814, avg=6198.83, stdev=1967.34, samples=12 00:08:05.332 write: IOPS=6998, BW=27.3MiB/s (28.7MB/s)(145MiB/5319msec); 0 zone resets 00:08:05.332 slat (usec): min=12, max=1849, avg=50.26, stdev=123.20 00:08:05.332 clat (usec): min=476, max=14061, avg=6029.73, stdev=1733.97 00:08:05.332 lat (usec): min=504, max=14097, avg=6079.99, stdev=1747.26 00:08:05.332 clat percentiles (usec): 00:08:05.332 | 1.00th=[ 2442], 5.00th=[ 3097], 10.00th=[ 3490], 20.00th=[ 4146], 00:08:05.332 | 30.00th=[ 4883], 40.00th=[ 6063], 50.00th=[ 6587], 60.00th=[ 6915], 00:08:05.333 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8094], 00:08:05.333 | 99.00th=[10159], 99.50th=[10814], 99.90th=[12125], 99.95th=[12780], 00:08:05.333 | 99.99th=[13829] 00:08:05.333 bw ( KiB/s): min=11520, max=39856, per=88.53%, avg=24783.33, stdev=7733.53, samples=12 00:08:05.333 iops : min= 2880, max= 9964, avg=6195.83, stdev=1933.38, samples=12 00:08:05.333 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:08:05.333 lat (msec) : 2=0.14%, 4=9.24%, 10=86.37%, 20=4.24% 00:08:05.333 cpu : usr=6.44%, sys=23.33%, ctx=6496, majf=0, minf=66 00:08:05.333 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:05.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:05.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:05.333 issued rwts: total=74004,37225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:05.333 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:08:05.333 00:08:05.333 Run status group 0 (all jobs): 00:08:05.333 READ: bw=48.1MiB/s (50.5MB/s), 48.1MiB/s-48.1MiB/s (50.5MB/s-50.5MB/s), io=289MiB (303MB), run=6005-6005msec 00:08:05.333 WRITE: bw=27.3MiB/s (28.7MB/s), 27.3MiB/s-27.3MiB/s (28.7MB/s-28.7MB/s), io=145MiB (152MB), run=5319-5319msec 00:08:05.333 00:08:05.333 Disk stats (read/write): 00:08:05.333 nvme0n1: ios=73284/36469, merge=0/0, ticks=505808/203831, in_queue=709639, util=98.55% 00:08:05.333 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:05.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:05.333 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:05.333 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:08:05.333 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:05.333 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.333 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:05.333 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.333 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:08:05.333 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:05.901 rmmod nvme_tcp 00:08:05.901 rmmod nvme_fabrics 00:08:05.901 rmmod nvme_keyring 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64359 ']' 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64359 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64359 ']' 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64359 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64359 00:08:05.901 killing process with pid 64359 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64359' 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64359 00:08:05.901 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64359 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:06.161 12:15:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.161 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.421 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:06.421 00:08:06.421 real 0m19.915s 00:08:06.421 user 1m13.999s 00:08:06.421 sys 0m10.119s 00:08:06.421 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.421 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:06.421 ************************************ 00:08:06.421 END TEST nvmf_target_multipath 00:08:06.421 ************************************ 00:08:06.421 12:15:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:06.421 12:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:06.421 12:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.421 12:15:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.421 ************************************ 00:08:06.421 START TEST nvmf_zcopy 00:08:06.421 ************************************ 00:08:06.421 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:06.421 * Looking for test storage... 
00:08:06.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:06.422 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:06.422 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:06.422 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:06.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.422 --rc genhtml_branch_coverage=1 00:08:06.422 --rc genhtml_function_coverage=1 00:08:06.422 --rc genhtml_legend=1 00:08:06.422 --rc geninfo_all_blocks=1 00:08:06.422 --rc geninfo_unexecuted_blocks=1 00:08:06.422 00:08:06.422 ' 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:06.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.422 --rc genhtml_branch_coverage=1 00:08:06.422 --rc genhtml_function_coverage=1 00:08:06.422 --rc genhtml_legend=1 00:08:06.422 --rc geninfo_all_blocks=1 00:08:06.422 --rc geninfo_unexecuted_blocks=1 00:08:06.422 00:08:06.422 ' 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:06.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.422 --rc genhtml_branch_coverage=1 00:08:06.422 --rc genhtml_function_coverage=1 00:08:06.422 --rc genhtml_legend=1 00:08:06.422 --rc geninfo_all_blocks=1 00:08:06.422 --rc geninfo_unexecuted_blocks=1 00:08:06.422 00:08:06.422 ' 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:06.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.422 --rc genhtml_branch_coverage=1 00:08:06.422 --rc genhtml_function_coverage=1 00:08:06.422 --rc genhtml_legend=1 00:08:06.422 --rc geninfo_all_blocks=1 00:08:06.422 --rc geninfo_unexecuted_blocks=1 00:08:06.422 00:08:06.422 ' 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
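The xtrace above walks through the harness's lcov version gate: "lt 1.15 2" expands into "cmp_versions 1.15 '<' 2", which splits both version strings on dots and compares them field by field before choosing the coverage flags. A minimal standalone sketch of the same comparison, assuming integer fields and an invented helper name (ver_lt is not the harness's own function):

    # Hedged sketch of a dotted-version comparator in the spirit of the
    # cmp_versions trace above; ver_lt is an invented name for illustration.
    ver_lt() {                      # true (exit 0) when $1 < $2
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
            (( x < y )) && return 0           # first differing field decides
            (( x > y )) && return 1
        done
        return 1                              # equal versions are not "less than"
    }

    ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # mirrors the check in the log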
00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.422 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.681 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:08:06.681 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:08:06.681 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.681 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.681 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:06.681 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.681 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.681 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.681 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.681 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.681 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.682 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
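The "line 33: [: : integer expression expected" message captured above is the shell's test builtin rejecting an empty string on the numeric side of -eq (the trace shows '[' '' -eq 1 ']'); the condition is treated as false and the script carries on, so the message is noise rather than a failure. A hedged illustration of the failure mode and one way to guard it (the variable name is invented for the example):

    flag=""                                      # empty/unset configuration value
    [ "$flag" -eq 1 ] && echo enabled            # prints: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo enabled       # defaulting to 0 keeps the test quiet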
00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:06.682 Cannot find device "nvmf_init_br" 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:06.682 12:15:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:06.682 Cannot find device "nvmf_init_br2" 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:06.682 Cannot find device "nvmf_tgt_br" 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:06.682 Cannot find device "nvmf_tgt_br2" 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:06.682 Cannot find device "nvmf_init_br" 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:06.682 Cannot find device "nvmf_init_br2" 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:06.682 Cannot find device "nvmf_tgt_br" 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:06.682 Cannot find device "nvmf_tgt_br2" 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:06.682 Cannot find device "nvmf_br" 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:06.682 Cannot find device "nvmf_init_if" 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:06.682 Cannot find device "nvmf_init_if2" 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:06.682 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:06.683 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:06.942 12:15:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:06.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:06.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:08:06.942 00:08:06.942 --- 10.0.0.3 ping statistics --- 00:08:06.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.942 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:06.942 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:06.942 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:08:06.942 00:08:06.942 --- 10.0.0.4 ping statistics --- 00:08:06.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.942 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:06.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:06.942 00:08:06.942 --- 10.0.0.1 ping statistics --- 00:08:06.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.942 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:06.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:06.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:08:06.942 00:08:06.942 --- 10.0.0.2 ping statistics --- 00:08:06.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.942 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.942 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=64887 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 64887 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 64887 ']' 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.943 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.943 [2024-12-06 12:15:53.521800] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
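Before the target app is launched, nvmf_veth_init (traced above) assembles the test network: a dedicated namespace for the target, veth pairs for two initiator-side and two target-side interfaces, a bridge joining all the peer ends, addresses 10.0.0.1-2 on the host and 10.0.0.3-4 inside the namespace, iptables ACCEPT rules for port 4420, and a ping sweep to confirm reachability. A condensed, hedged sketch of a single initiator/target pair with invented interface names (run as root; the real helper also opens the firewall as shown in the log):

    ip netns add tgt_ns                                    # target gets its own namespace
    ip link add init_if type veth peer name init_br        # initiator-side veth pair
    ip link add tgt_if type veth peer name tgt_br          # target-side veth pair
    ip link set tgt_if netns tgt_ns                        # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev init_if                    # host/initiator address
    ip netns exec tgt_ns ip addr add 10.0.0.3/24 dev tgt_if
    ip link add test_br type bridge                        # bridge ties the peer ends together
    ip link set init_br master test_br
    ip link set tgt_br master test_br
    for dev in init_if init_br tgt_br test_br; do ip link set "$dev" up; done
    ip netns exec tgt_ns ip link set tgt_if up
    ip netns exec tgt_ns ip link set lo up
    ping -c 1 10.0.0.3                                     # host should now reach the namespace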
00:08:06.943 [2024-12-06 12:15:53.521890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.203 [2024-12-06 12:15:53.660335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.203 [2024-12-06 12:15:53.687728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.203 [2024-12-06 12:15:53.687794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.203 [2024-12-06 12:15:53.687819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.203 [2024-12-06 12:15:53.687826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.203 [2024-12-06 12:15:53.687831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.203 [2024-12-06 12:15:53.688138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.203 [2024-12-06 12:15:53.714490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.203 [2024-12-06 12:15:53.836241] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.203 [2024-12-06 12:15:53.852334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.203 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.463 malloc0 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.463 { 00:08:07.463 "params": { 00:08:07.463 "name": "Nvme$subsystem", 00:08:07.463 "trtype": "$TEST_TRANSPORT", 00:08:07.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.463 "adrfam": "ipv4", 00:08:07.463 "trsvcid": "$NVMF_PORT", 00:08:07.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.463 "hdgst": ${hdgst:-false}, 00:08:07.463 "ddgst": ${ddgst:-false} 00:08:07.463 }, 00:08:07.463 "method": "bdev_nvme_attach_controller" 00:08:07.463 } 00:08:07.463 EOF 00:08:07.463 )") 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
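Condensed from the rpc_cmd calls traced above, the target-side setup for this zcopy run is: create a TCP transport with zero-copy enabled, create subsystem cnode1, add a listener on 10.0.0.3:4420, create a 32 MiB malloc bdev with 4 KiB blocks, and attach it as namespace 1. The same sequence written as direct rpc.py invocations, a sketch only (flags copied from the trace; the test itself goes through its rpc_cmd wrapper against the nvmf_tgt started earlier with -m 0x2):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                          # allow any host, serial, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                   # 32 MiB RAM-backed bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1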
00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:07.463 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.463 "params": { 00:08:07.463 "name": "Nvme1", 00:08:07.463 "trtype": "tcp", 00:08:07.463 "traddr": "10.0.0.3", 00:08:07.463 "adrfam": "ipv4", 00:08:07.463 "trsvcid": "4420", 00:08:07.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.463 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.463 "hdgst": false, 00:08:07.463 "ddgst": false 00:08:07.463 }, 00:08:07.463 "method": "bdev_nvme_attach_controller" 00:08:07.463 }' 00:08:07.463 [2024-12-06 12:15:53.941735] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:08:07.463 [2024-12-06 12:15:53.941843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64907 ] 00:08:07.463 [2024-12-06 12:15:54.094863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.722 [2024-12-06 12:15:54.135785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.722 [2024-12-06 12:15:54.177840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.722 Running I/O for 10 seconds... 00:08:10.034 6994.00 IOPS, 54.64 MiB/s [2024-12-06T12:15:57.629Z] 7105.00 IOPS, 55.51 MiB/s [2024-12-06T12:15:58.564Z] 7161.33 IOPS, 55.95 MiB/s [2024-12-06T12:15:59.496Z] 7177.50 IOPS, 56.07 MiB/s [2024-12-06T12:16:00.430Z] 7210.80 IOPS, 56.33 MiB/s [2024-12-06T12:16:01.366Z] 7222.17 IOPS, 56.42 MiB/s [2024-12-06T12:16:02.298Z] 7239.57 IOPS, 56.56 MiB/s [2024-12-06T12:16:03.672Z] 7254.75 IOPS, 56.68 MiB/s [2024-12-06T12:16:04.610Z] 7262.67 IOPS, 56.74 MiB/s [2024-12-06T12:16:04.610Z] 7270.50 IOPS, 56.80 MiB/s 00:08:17.952 Latency(us) 00:08:17.952 [2024-12-06T12:16:04.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.952 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:17.952 Verification LBA range: start 0x0 length 0x1000 00:08:17.952 Nvme1n1 : 10.01 7273.21 56.82 0.00 0.00 17543.67 1593.72 32648.84 00:08:17.952 [2024-12-06T12:16:04.610Z] =================================================================================================================== 00:08:17.952 [2024-12-06T12:16:04.610Z] Total : 7273.21 56.82 0.00 0.00 17543.67 1593.72 32648.84 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65030 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.952 { 00:08:17.952 "params": { 00:08:17.952 "name": "Nvme$subsystem", 00:08:17.952 "trtype": "$TEST_TRANSPORT", 00:08:17.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.952 "adrfam": "ipv4", 00:08:17.952 "trsvcid": "$NVMF_PORT", 00:08:17.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.952 "hdgst": ${hdgst:-false}, 00:08:17.952 "ddgst": ${ddgst:-false} 00:08:17.952 }, 00:08:17.952 "method": "bdev_nvme_attach_controller" 00:08:17.952 } 00:08:17.952 EOF 00:08:17.952 )") 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:17.952 [2024-12-06 12:16:04.435594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.435663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:17.952 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.952 "params": { 00:08:17.952 "name": "Nvme1", 00:08:17.952 "trtype": "tcp", 00:08:17.952 "traddr": "10.0.0.3", 00:08:17.952 "adrfam": "ipv4", 00:08:17.952 "trsvcid": "4420", 00:08:17.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.952 "hdgst": false, 00:08:17.952 "ddgst": false 00:08:17.952 }, 00:08:17.952 "method": "bdev_nvme_attach_controller" 00:08:17.952 }' 00:08:17.952 [2024-12-06 12:16:04.447559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.447600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.459558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.459598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.471550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.471589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.482547] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:08:17.952 [2024-12-06 12:16:04.482641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65030 ] 00:08:17.952 [2024-12-06 12:16:04.483536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.483575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.495552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.495592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.507553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.507576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.519543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.519582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.531525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.531549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.543529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.543552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.555578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.555603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.567534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.567558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.579539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.579565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.952 [2024-12-06 12:16:04.591556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.952 [2024-12-06 12:16:04.591579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.953 [2024-12-06 12:16:04.603578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.953 [2024-12-06 12:16:04.603618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.615563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.615601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.625429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.212 [2024-12-06 12:16:04.627566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.627605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.639665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.639710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.651611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.651651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.657098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.212 [2024-12-06 12:16:04.663609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.663649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.675642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.675698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.687670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.687721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.695467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.212 [2024-12-06 12:16:04.699648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.699691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.711650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.711698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.723671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.723716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.735666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.735710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.212 [2024-12-06 12:16:04.747662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.212 [2024-12-06 12:16:04.747705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.213 [2024-12-06 12:16:04.759686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.213 [2024-12-06 12:16:04.759729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.213 [2024-12-06 12:16:04.771675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.213 [2024-12-06 12:16:04.771719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.213 [2024-12-06 12:16:04.783684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.213 [2024-12-06 12:16:04.783729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.213 Running I/O for 5 seconds... 
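The xtrace above shows target/zcopy.sh calling gen_nvmf_target_json, which builds one bdev_nvme_attach_controller entry per subsystem from a heredoc, runs the result through jq, and hands it to bdevperf as /dev/fd/63 via process substitution; the repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs that follow appear to be the expected outcome of the test re-issuing nvmf_subsystem_add_ns RPCs for NSID 1 while that namespace is still attached and I/O is in flight. Below is a minimal stand-alone sketch of the same config-over-process-substitution pattern, using the parameters and bdevperf flags printed in this run; it assumes the usual SPDK top-level "subsystems"/"bdev" JSON-config wrapper around the fragment shown above, and the helper name build_attach_json is hypothetical, not part of nvmf/common.sh.

#!/usr/bin/env bash
# Sketch only: emit a JSON config equivalent to the fragment printed by
# gen_nvmf_target_json above, wrapped in the top-level layout that
# bdevperf --json consumes (wrapper shape is an assumption, not shown in this log).
build_attach_json() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
}

# Process substitution exposes the generated config as an anonymous /dev/fd/NN
# path, which is why the logged command line reads "--json /dev/fd/63".
# Path and workload flags match the 5-second randrw run started above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(build_attach_json) -t 5 -q 128 -w randrw -M 50 -o 8192

Feeding the config through a file descriptor rather than a temp file means each bdevperf instance in this log (pid 64907 for the 10-second verify run, pid 65030 for the 5-second randrw run) gets its own ephemeral config without anything being written to disk.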
00:08:18.213 [2024-12-06 12:16:04.795709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.213 [2024-12-06 12:16:04.795751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.213 [2024-12-06 12:16:04.812293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.213 [2024-12-06 12:16:04.812327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.213 [2024-12-06 12:16:04.829746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.213 [2024-12-06 12:16:04.829792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.213 [2024-12-06 12:16:04.846836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.213 [2024-12-06 12:16:04.846882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.213 [2024-12-06 12:16:04.863955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.213 [2024-12-06 12:16:04.864003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:04.879139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:04.879184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:04.893966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:04.894013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:04.905110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:04.905157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:04.921131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:04.921210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:04.937799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:04.937846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:04.955165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:04.955208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:04.971953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:04.972000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:04.989004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:04.989050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:05.004966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:05.005012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:05.016061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 
[2024-12-06 12:16:05.016107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:05.031236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:05.031270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:05.048497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:05.048544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:05.065055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:05.065101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:05.082231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:05.082277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:05.098802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:05.098848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.472 [2024-12-06 12:16:05.116456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.472 [2024-12-06 12:16:05.116502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.732 [2024-12-06 12:16:05.133712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.732 [2024-12-06 12:16:05.133758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.732 [2024-12-06 12:16:05.149008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.732 [2024-12-06 12:16:05.149053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.732 [2024-12-06 12:16:05.160157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.732 [2024-12-06 12:16:05.160213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.732 [2024-12-06 12:16:05.176008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.732 [2024-12-06 12:16:05.176054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.732 [2024-12-06 12:16:05.192812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.732 [2024-12-06 12:16:05.192859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.732 [2024-12-06 12:16:05.209441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.732 [2024-12-06 12:16:05.209474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.732 [2024-12-06 12:16:05.226456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.732 [2024-12-06 12:16:05.226490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.733 [2024-12-06 12:16:05.243355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.733 [2024-12-06 12:16:05.243403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.733 [2024-12-06 12:16:05.259527] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.733 [2024-12-06 12:16:05.259573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.733 [2024-12-06 12:16:05.276437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.733 [2024-12-06 12:16:05.276470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.733 [2024-12-06 12:16:05.292967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.733 [2024-12-06 12:16:05.293014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.733 [2024-12-06 12:16:05.310180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.733 [2024-12-06 12:16:05.310236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.733 [2024-12-06 12:16:05.326896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.733 [2024-12-06 12:16:05.326942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.733 [2024-12-06 12:16:05.344038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.733 [2024-12-06 12:16:05.344085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.733 [2024-12-06 12:16:05.361671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.733 [2024-12-06 12:16:05.361718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.733 [2024-12-06 12:16:05.376315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.733 [2024-12-06 12:16:05.376353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.391718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.391768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.409473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.409519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.425721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.425768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.442959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.443027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.459584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.459631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.476742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.476789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.493267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.493313] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.509863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.509910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.525829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.525876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.542921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.542967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.559074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.559108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.575732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.575778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.592457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.592491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.609410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.609458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.626861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.626908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.994 [2024-12-06 12:16:05.642037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.994 [2024-12-06 12:16:05.642083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.657631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.657681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.674474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.674523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.691557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.691603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.707994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.708041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.724084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.724132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.733466] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.733501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.749355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.749388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.764054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.764102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.779328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.779376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 13524.00 IOPS, 105.66 MiB/s [2024-12-06T12:16:05.918Z] [2024-12-06 12:16:05.796802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.796831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.812018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.812065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.822884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.822930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.838518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.838582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.855932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.856111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.872645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.872678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.890143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.890205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.260 [2024-12-06 12:16:05.904958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.260 [2024-12-06 12:16:05.904990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:05.920491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:05.920527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:05.938112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:05.938145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:05.953451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:19.544 [2024-12-06 12:16:05.953484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:05.969810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:05.969841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:05.986570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:05.986744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.003690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.003721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.020797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.020968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.036935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.037104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.048506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.048689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.064558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.064743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.081361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.081519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.097922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.098091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.114420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.114581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.131046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.131225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.146067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.146260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.162333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.162492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.544 [2024-12-06 12:16:06.178606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.544 [2024-12-06 12:16:06.178783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.188586] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.188753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.203667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.203867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.220614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.220783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.236899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.237067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.253406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.253565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.270643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.270812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.285951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.286120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.303261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.303437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.319779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.319946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.335916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.335948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.352742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.352911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.369520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.369569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.386456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.386508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.401785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.401817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.411552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.411711] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.426440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.426589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.437928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.438091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.812 [2024-12-06 12:16:06.454463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.812 [2024-12-06 12:16:06.454495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.070 [2024-12-06 12:16:06.470430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.070 [2024-12-06 12:16:06.470463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.070 [2024-12-06 12:16:06.487902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.070 [2024-12-06 12:16:06.488072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.070 [2024-12-06 12:16:06.503429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.070 [2024-12-06 12:16:06.503462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.070 [2024-12-06 12:16:06.514613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.070 [2024-12-06 12:16:06.514782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.070 [2024-12-06 12:16:06.530868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.070 [2024-12-06 12:16:06.530900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.070 [2024-12-06 12:16:06.548301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.070 [2024-12-06 12:16:06.548333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.070 [2024-12-06 12:16:06.564951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.070 [2024-12-06 12:16:06.564982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.070 [2024-12-06 12:16:06.582390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.070 [2024-12-06 12:16:06.582422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.070 [2024-12-06 12:16:06.598434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.070 [2024-12-06 12:16:06.598466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.071 [2024-12-06 12:16:06.615535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.071 [2024-12-06 12:16:06.615597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.071 [2024-12-06 12:16:06.632425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.071 [2024-12-06 12:16:06.632457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.071 [2024-12-06 12:16:06.649671] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.071 [2024-12-06 12:16:06.649842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.071 [2024-12-06 12:16:06.666996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.071 [2024-12-06 12:16:06.667067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.071 [2024-12-06 12:16:06.683594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.071 [2024-12-06 12:16:06.683763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.071 [2024-12-06 12:16:06.699478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.071 [2024-12-06 12:16:06.699525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.071 [2024-12-06 12:16:06.710284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.071 [2024-12-06 12:16:06.710318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.726853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.726886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.742847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.742879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.760050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.760081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.778763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.778797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.793090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.793122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 13467.50 IOPS, 105.21 MiB/s [2024-12-06T12:16:06.988Z] [2024-12-06 12:16:06.808780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.808828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.828079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.828273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.842276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.842308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.857404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.857436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.868483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:20.330 [2024-12-06 12:16:06.868515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.884190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.884386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.900694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.900862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.918164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.918363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.933278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.933438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.951050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.951224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.965537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.965721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.330 [2024-12-06 12:16:06.982357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.330 [2024-12-06 12:16:06.982518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:06.997071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:06.997270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.012609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.012778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.030294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.030466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.045954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.046122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.063149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.063316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.080033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.080246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.096407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.096578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.112876] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.113044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.130400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.130586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.146907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.147067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.164430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.164617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.180598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.180765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.197907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.198075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.214709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.214881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.589 [2024-12-06 12:16:07.231669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.589 [2024-12-06 12:16:07.231837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.249357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.249518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.265001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.265199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.282481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.282669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.297815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.297983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.313445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.313477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.330727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.330757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.347736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.347767] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.364750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.364920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.380138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.380319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.396721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.396753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.413449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.413483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.428064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.428095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.445556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.445587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.460890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.460920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.478230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.478262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.849 [2024-12-06 12:16:07.494433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.849 [2024-12-06 12:16:07.494624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.511297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.511346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.528708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.528739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.545378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.545409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.561265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.561296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.578446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.578479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.595426] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.595457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.612150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.612222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.628475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.628635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.645836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.645867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.661946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.661976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.679757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.679927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.695293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.695356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.706036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.706067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.722481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.722668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.738648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.738679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.108 [2024-12-06 12:16:07.755757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.108 [2024-12-06 12:16:07.755787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.772098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.772129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.789537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.789584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 13484.33 IOPS, 105.35 MiB/s [2024-12-06T12:16:08.026Z] [2024-12-06 12:16:07.805810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.805843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.823449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:21.368 [2024-12-06 12:16:07.823479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.840356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.840388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.856381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.856415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.865994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.866026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.881376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.881408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.899853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.900029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.913898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.913930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.930522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.930711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.945816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.945985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.956824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.956993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.972960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.972992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:07.989965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:07.989997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.368 [2024-12-06 12:16:08.006862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.368 [2024-12-06 12:16:08.006893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.025219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.025259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.040595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.040626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.051429] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.051619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.067784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.067815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.085390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.085421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.100440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.100473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.116667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.116698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.133598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.133628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.150529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.150716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.167820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.167851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.184774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.184804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.201039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.201069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.218433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.218465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.234618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.234647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.252796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.252827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.628 [2024-12-06 12:16:08.267553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.628 [2024-12-06 12:16:08.267583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.887 [2024-12-06 12:16:08.285052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.887 [2024-12-06 12:16:08.285100] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.887 [2024-12-06 12:16:08.299136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.887 [2024-12-06 12:16:08.299217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.887 [2024-12-06 12:16:08.315728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.887 [2024-12-06 12:16:08.315759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.887 [2024-12-06 12:16:08.332736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.887 [2024-12-06 12:16:08.332909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.887 [2024-12-06 12:16:08.349942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.887 [2024-12-06 12:16:08.349974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.887 [2024-12-06 12:16:08.367077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.887 [2024-12-06 12:16:08.367109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.887 [2024-12-06 12:16:08.383601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.887 [2024-12-06 12:16:08.383771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.887 [2024-12-06 12:16:08.394869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.887 [2024-12-06 12:16:08.395046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.887 [2024-12-06 12:16:08.409842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.887 [2024-12-06 12:16:08.409875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.887 [2024-12-06 12:16:08.421378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.887 [2024-12-06 12:16:08.421412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.887 [2024-12-06 12:16:08.438850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.887 [2024-12-06 12:16:08.438882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.888 [2024-12-06 12:16:08.453012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.888 [2024-12-06 12:16:08.453042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.888 [2024-12-06 12:16:08.468061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.888 [2024-12-06 12:16:08.468264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.888 [2024-12-06 12:16:08.484884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.888 [2024-12-06 12:16:08.484932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.888 [2024-12-06 12:16:08.501246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.888 [2024-12-06 12:16:08.501278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.888 [2024-12-06 12:16:08.518207] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.888 [2024-12-06 12:16:08.518253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.888 [2024-12-06 12:16:08.533826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.888 [2024-12-06 12:16:08.533857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.146 [2024-12-06 12:16:08.545434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.146 [2024-12-06 12:16:08.545483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.146 [2024-12-06 12:16:08.561533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.561563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.578921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.578952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.594016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.594046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.609186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.609245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.626581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.626612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.643361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.643408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.660263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.660294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.677436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.677467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.694214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.694244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.710126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.710332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.727508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.727540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.743912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.743943] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.760932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.760963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.777831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.777863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.147 [2024-12-06 12:16:08.794805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.147 [2024-12-06 12:16:08.794835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 13496.75 IOPS, 105.44 MiB/s [2024-12-06T12:16:09.065Z] [2024-12-06 12:16:08.811686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.811720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:08.829418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.829450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:08.844824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.844856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:08.862180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.862238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:08.878655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.878687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:08.895140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.895200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:08.912702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.912735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:08.927324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.927357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:08.943719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.943752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:08.958980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.959052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:08.973515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.973705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 
12:16:08.988491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:08.988679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:09.004741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:09.004772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:09.021425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:09.021457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:09.038513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:09.038546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.407 [2024-12-06 12:16:09.054932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.407 [2024-12-06 12:16:09.054964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.070983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.071055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.087721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.087892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.103199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.103266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.120695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.120726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.137356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.137388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.153966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.153997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.170719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.170890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.187438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.187470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.204331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.204361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.220381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.220412] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.237238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.237270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.254330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.254361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.666 [2024-12-06 12:16:09.271355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.666 [2024-12-06 12:16:09.271387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.667 [2024-12-06 12:16:09.288441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.667 [2024-12-06 12:16:09.288472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.667 [2024-12-06 12:16:09.305430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.667 [2024-12-06 12:16:09.305460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.926 [2024-12-06 12:16:09.322756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.926 [2024-12-06 12:16:09.322788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.926 [2024-12-06 12:16:09.338138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.926 [2024-12-06 12:16:09.338216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.926 [2024-12-06 12:16:09.355831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.926 [2024-12-06 12:16:09.356003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.926 [2024-12-06 12:16:09.371036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.926 [2024-12-06 12:16:09.371212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.926 [2024-12-06 12:16:09.387492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.926 [2024-12-06 12:16:09.387522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.926 [2024-12-06 12:16:09.404314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.926 [2024-12-06 12:16:09.404344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.926 [2024-12-06 12:16:09.420714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.926 [2024-12-06 12:16:09.420745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.927 [2024-12-06 12:16:09.436740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.927 [2024-12-06 12:16:09.436771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.927 [2024-12-06 12:16:09.454407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.927 [2024-12-06 12:16:09.454441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.927 [2024-12-06 12:16:09.469215] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.927 [2024-12-06 12:16:09.469279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.927 [2024-12-06 12:16:09.485552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.927 [2024-12-06 12:16:09.485630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.927 [2024-12-06 12:16:09.501638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.927 [2024-12-06 12:16:09.501669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.927 [2024-12-06 12:16:09.519010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.927 [2024-12-06 12:16:09.519043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.927 [2024-12-06 12:16:09.535525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.927 [2024-12-06 12:16:09.535556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.927 [2024-12-06 12:16:09.552886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.927 [2024-12-06 12:16:09.553056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.927 [2024-12-06 12:16:09.568334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.927 [2024-12-06 12:16:09.568366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.583897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.584066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.601735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.601764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.617156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.617230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.628105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.628136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.643728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.643759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.661600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.661631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.677097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.677129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.694378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.694410] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.711181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.711260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.728463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.728493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.745276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.745308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.761591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.761622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.778924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.778955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 [2024-12-06 12:16:09.794193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.186 [2024-12-06 12:16:09.794237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.186 13498.60 IOPS, 105.46 MiB/s 00:08:23.186 Latency(us) 00:08:23.186 [2024-12-06T12:16:09.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.186 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:23.186 Nvme1n1 : 5.01 13503.11 105.49 0.00 0.00 9469.39 4081.11 17277.67 00:08:23.186 [2024-12-06T12:16:09.845Z] =================================================================================================================== 00:08:23.187 [2024-12-06T12:16:09.845Z] Total : 13503.11 105.49 0.00 0.00 9469.39 4081.11 17277.67 00:08:23.187 [2024-12-06 12:16:09.804925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.187 [2024-12-06 12:16:09.804950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.187 [2024-12-06 12:16:09.816916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.187 [2024-12-06 12:16:09.817090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.187 [2024-12-06 12:16:09.828960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.187 [2024-12-06 12:16:09.829302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.187 [2024-12-06 12:16:09.840993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.187 [2024-12-06 12:16:09.841330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.446 [2024-12-06 12:16:09.852971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.446 [2024-12-06 12:16:09.853284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.446 [2024-12-06 12:16:09.864984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.446 [2024-12-06 12:16:09.865307] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.446 [2024-12-06 12:16:09.876975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.446 [2024-12-06 12:16:09.877281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.446 [2024-12-06 12:16:09.888966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.446 [2024-12-06 12:16:09.889202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.446 [2024-12-06 12:16:09.900950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.446 [2024-12-06 12:16:09.901110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.446 [2024-12-06 12:16:09.912976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.446 [2024-12-06 12:16:09.913267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.446 [2024-12-06 12:16:09.924950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.446 [2024-12-06 12:16:09.925105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.446 [2024-12-06 12:16:09.936950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.446 [2024-12-06 12:16:09.937074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.446 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65030) - No such process 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65030 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.446 delay0 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.446 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:23.705 [2024-12-06 12:16:10.138393] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: 
Skipping unsupported current discovery service or discovery service referral 00:08:30.273 Initializing NVMe Controllers 00:08:30.273 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:30.273 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:30.273 Initialization complete. Launching workers. 00:08:30.273 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 76 00:08:30.273 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 363, failed to submit 33 00:08:30.273 success 242, unsuccessful 121, failed 0 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:30.273 rmmod nvme_tcp 00:08:30.273 rmmod nvme_fabrics 00:08:30.273 rmmod nvme_keyring 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 64887 ']' 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 64887 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 64887 ']' 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 64887 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64887 00:08:30.273 killing process with pid 64887 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64887' 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 64887 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 64887 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:08:30.273 00:08:30.273 real 0m23.841s 00:08:30.273 user 0m39.004s 00:08:30.273 sys 0m6.728s 00:08:30.273 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.273 ************************************ 00:08:30.274 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.274 END TEST nvmf_zcopy 00:08:30.274 ************************************ 00:08:30.274 12:16:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:30.274 12:16:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
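The tail of the zcopy run above reduces to a short RPC sequence: detach namespace 1, layer a delay bdev on top of malloc0, re-expose it as namespace 1, and drive the abort example against the TCP listener. A minimal sketch of the same calls issued through scripts/rpc.py, run from the SPDK repo root (the NQN, bdev names, latency values, and the 10.0.0.3:4420 listener are taken from the trace; the default RPC socket is assumed):

# swap the original namespace for a delay bdev so aborts have queued I/O to hit
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# queue random read/write I/O for 5 s at depth 64 and submit aborts against it
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'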
00:08:30.274 12:16:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.274 12:16:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.274 ************************************ 00:08:30.274 START TEST nvmf_nmic 00:08:30.274 ************************************ 00:08:30.274 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:30.274 * Looking for test storage... 00:08:30.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:30.274 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:30.274 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:30.274 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:30.533 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:30.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.534 --rc genhtml_branch_coverage=1 00:08:30.534 --rc genhtml_function_coverage=1 00:08:30.534 --rc genhtml_legend=1 00:08:30.534 --rc geninfo_all_blocks=1 00:08:30.534 --rc geninfo_unexecuted_blocks=1 00:08:30.534 00:08:30.534 ' 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:30.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.534 --rc genhtml_branch_coverage=1 00:08:30.534 --rc genhtml_function_coverage=1 00:08:30.534 --rc genhtml_legend=1 00:08:30.534 --rc geninfo_all_blocks=1 00:08:30.534 --rc geninfo_unexecuted_blocks=1 00:08:30.534 00:08:30.534 ' 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:30.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.534 --rc genhtml_branch_coverage=1 00:08:30.534 --rc genhtml_function_coverage=1 00:08:30.534 --rc genhtml_legend=1 00:08:30.534 --rc geninfo_all_blocks=1 00:08:30.534 --rc geninfo_unexecuted_blocks=1 00:08:30.534 00:08:30.534 ' 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:30.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.534 --rc genhtml_branch_coverage=1 00:08:30.534 --rc genhtml_function_coverage=1 00:08:30.534 --rc genhtml_legend=1 00:08:30.534 --rc geninfo_all_blocks=1 00:08:30.534 --rc geninfo_unexecuted_blocks=1 00:08:30.534 00:08:30.534 ' 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.534 12:16:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:30.534 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.534 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:30.534 12:16:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:30.534 Cannot 
find device "nvmf_init_br" 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:08:30.534 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:30.534 Cannot find device "nvmf_init_br2" 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:30.535 Cannot find device "nvmf_tgt_br" 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.535 Cannot find device "nvmf_tgt_br2" 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:30.535 Cannot find device "nvmf_init_br" 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:30.535 Cannot find device "nvmf_init_br2" 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:30.535 Cannot find device "nvmf_tgt_br" 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:30.535 Cannot find device "nvmf_tgt_br2" 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:30.535 Cannot find device "nvmf_br" 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:30.535 Cannot find device "nvmf_init_if" 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:30.535 Cannot find device "nvmf_init_if2" 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
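The nvmf_veth_init trace that follows wires an initiator-side veth pair and a target-side pair (moved into the nvmf_tgt_ns_spdk namespace) onto a common bridge, assigns 10.0.0.1/10.0.0.3 to the first pair and 10.0.0.2/10.0.0.4 to the second, opens TCP port 4420, and ping-checks each address. Condensed to the first initiator/target pair only, and assuming it is run as root, the setup amounts to (interface names and addresses as in the trace; the second pair repeats the same pattern):

# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# address each end and bring the links up
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# bridge the *_br ends so initiator and target share one L2 segment
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# allow NVMe/TCP traffic in and verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3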
00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:30.535 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:30.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:30.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:08:30.793 00:08:30.793 --- 10.0.0.3 ping statistics --- 00:08:30.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.793 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:30.793 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:30.793 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:08:30.793 00:08:30.793 --- 10.0.0.4 ping statistics --- 00:08:30.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.793 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:30.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:30.793 00:08:30.793 --- 10.0.0.1 ping statistics --- 00:08:30.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.793 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:30.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:30.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:08:30.793 00:08:30.793 --- 10.0.0.2 ping statistics --- 00:08:30.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.793 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65409 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65409 00:08:30.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65409 ']' 00:08:30.793 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.794 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.794 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.794 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.794 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.052 [2024-12-06 12:16:17.493418] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
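Editor's note: the nvmf_veth_init sequence traced above (common.sh@145 through @225) builds a self-contained test topology: two "initiator" veth pairs stay on the host with 10.0.0.1/24 and 10.0.0.2/24, two "target" pairs have one end moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/24 and 10.0.0.4/24, all peer ends are enslaved to the nvmf_br bridge, iptables rules tagged with an SPDK_NVMF comment open TCP port 4420, and the pings confirm reachability in both directions. A condensed sketch for one initiator/target pair (the helper repeats this for the second pair):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair, stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.3                                           # host -> namespace sanity check

Tagging every firewall rule with the SPDK_NVMF comment is what lets the later cleanup (iptables-save | grep -v SPDK_NVMF | iptables-restore) drop the test rules without tracking individual rule handles.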
00:08:31.052 [2024-12-06 12:16:17.493506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.052 [2024-12-06 12:16:17.643146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.052 [2024-12-06 12:16:17.673897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.052 [2024-12-06 12:16:17.673950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.053 [2024-12-06 12:16:17.673960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.053 [2024-12-06 12:16:17.673966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.053 [2024-12-06 12:16:17.673972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.053 [2024-12-06 12:16:17.674830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.053 [2024-12-06 12:16:17.674912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.053 [2024-12-06 12:16:17.675571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.053 [2024-12-06 12:16:17.675628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.053 [2024-12-06 12:16:17.704126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.987 [2024-12-06 12:16:18.481561] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.987 Malloc0 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:31.987 12:16:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.987 [2024-12-06 12:16:18.540283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:31.987 test case1: single bdev can't be used in multiple subsystems 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.987 [2024-12-06 12:16:18.564076] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:31.987 [2024-12-06 12:16:18.564113] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:31.987 [2024-12-06 12:16:18.564140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.987 request: 00:08:31.987 { 00:08:31.987 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:31.987 "namespace": { 00:08:31.987 "bdev_name": "Malloc0", 00:08:31.987 "no_auto_visible": false, 00:08:31.987 "hide_metadata": false 00:08:31.987 }, 00:08:31.987 "method": "nvmf_subsystem_add_ns", 00:08:31.987 "req_id": 1 00:08:31.987 } 00:08:31.987 Got JSON-RPC error response 00:08:31.987 response: 00:08:31.987 { 00:08:31.987 "code": -32602, 00:08:31.987 "message": "Invalid parameters" 00:08:31.987 } 00:08:31.987 Adding namespace failed - expected result. 00:08:31.987 test case2: host connect to nvmf target in multiple paths 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.987 [2024-12-06 12:16:18.576166] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.987 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid=539e2455-b2a8-46ce-bfce-40a317783b05 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:32.245 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid=539e2455-b2a8-46ce-bfce-40a317783b05 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:08:32.245 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.245 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:32.245 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.245 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:32.245 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:34.778 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:34.778 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:34.778 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.778 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:34.778 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:08:34.778 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:34.778 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:34.778 [global] 00:08:34.778 thread=1 00:08:34.778 invalidate=1 00:08:34.778 rw=write 00:08:34.778 time_based=1 00:08:34.778 runtime=1 00:08:34.778 ioengine=libaio 00:08:34.778 direct=1 00:08:34.778 bs=4096 00:08:34.778 iodepth=1 00:08:34.778 norandommap=0 00:08:34.778 numjobs=1 00:08:34.778 00:08:34.778 verify_dump=1 00:08:34.778 verify_backlog=512 00:08:34.778 verify_state_save=0 00:08:34.778 do_verify=1 00:08:34.778 verify=crc32c-intel 00:08:34.778 [job0] 00:08:34.778 filename=/dev/nvme0n1 00:08:34.778 Could not set queue depth (nvme0n1) 00:08:34.778 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:34.778 fio-3.35 00:08:34.778 Starting 1 thread 00:08:35.715 00:08:35.715 job0: (groupid=0, jobs=1): err= 0: pid=65506: Fri Dec 6 12:16:22 2024 00:08:35.715 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:08:35.715 slat (nsec): min=10651, max=55567, avg=13317.47, stdev=4332.41 00:08:35.715 clat (usec): min=127, max=5024, avg=181.07, stdev=169.08 00:08:35.715 lat (usec): min=141, max=5038, avg=194.39, stdev=169.51 00:08:35.715 clat percentiles (usec): 00:08:35.715 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:08:35.716 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 176], 00:08:35.716 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 212], 00:08:35.716 | 99.00th=[ 233], 99.50th=[ 249], 99.90th=[ 3752], 99.95th=[ 3884], 00:08:35.716 | 99.99th=[ 5014] 00:08:35.716 write: IOPS=3198, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec); 0 zone resets 00:08:35.716 slat (nsec): min=15799, max=95509, avg=19973.88, stdev=6227.11 00:08:35.716 clat (usec): min=76, max=395, avg=102.96, stdev=15.66 00:08:35.716 lat (usec): min=93, max=420, avg=122.94, stdev=17.61 00:08:35.716 clat percentiles (usec): 00:08:35.716 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 92], 00:08:35.716 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 100], 60.00th=[ 103], 00:08:35.716 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 125], 95.00th=[ 133], 00:08:35.716 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 192], 00:08:35.716 | 99.99th=[ 396] 00:08:35.716 bw ( KiB/s): min=13584, max=13584, per=100.00%, avg=13584.00, stdev= 0.00, samples=1 00:08:35.716 iops : min= 3396, max= 3396, avg=3396.00, stdev= 0.00, samples=1 00:08:35.716 lat (usec) : 100=26.31%, 250=73.46%, 500=0.10%, 750=0.02% 00:08:35.716 lat (msec) : 2=0.02%, 4=0.08%, 10=0.02% 00:08:35.716 cpu : usr=2.20%, sys=8.10%, ctx=6274, majf=0, minf=5 00:08:35.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:35.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.716 issued rwts: total=3072,3202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.716 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:35.716 00:08:35.716 Run status group 0 (all jobs): 00:08:35.716 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:08:35.716 WRITE: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=12.5MiB (13.1MB), run=1001-1001msec 00:08:35.716 00:08:35.716 Disk 
stats (read/write): 00:08:35.716 nvme0n1: ios=2660/3072, merge=0/0, ticks=523/365, in_queue=888, util=90.58% 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:35.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.716 rmmod nvme_tcp 00:08:35.716 rmmod nvme_fabrics 00:08:35.716 rmmod nvme_keyring 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65409 ']' 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65409 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65409 ']' 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65409 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:35.716 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65409 00:08:35.976 killing process with pid 65409 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65409' 00:08:35.976 
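Editor's note: the fio-wrapper call above generated the ini job shown in the trace (libaio, 4 KiB sequential writes at queue depth 1 for one second, with crc32c-intel verification); the "Could not set queue depth (nvme0n1)" line is a fio warning, and the job still completed with the stats shown. A roughly equivalent standalone invocation, assuming the connected namespace still enumerates as /dev/nvme0n1, would be:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread=1 --numjobs=1 --invalidate=1 \
        --rw=write --bs=4096 --iodepth=1 --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

Each command-line option mirrors one line of the job file in the trace; this is a sketch for rerunning the workload by hand, not the wrapper script itself.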
12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 65409 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65409 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:35.976 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:08:36.236 00:08:36.236 real 0m6.006s 00:08:36.236 user 0m18.437s 00:08:36.236 sys 0m2.313s 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.236 ************************************ 00:08:36.236 END TEST nvmf_nmic 00:08:36.236 ************************************ 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.236 ************************************ 00:08:36.236 START TEST nvmf_fio_target 00:08:36.236 ************************************ 00:08:36.236 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:36.496 * Looking for test storage... 00:08:36.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.496 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:36.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.497 --rc genhtml_branch_coverage=1 00:08:36.497 --rc genhtml_function_coverage=1 00:08:36.497 --rc genhtml_legend=1 00:08:36.497 --rc geninfo_all_blocks=1 00:08:36.497 --rc geninfo_unexecuted_blocks=1 00:08:36.497 00:08:36.497 ' 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:36.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.497 --rc genhtml_branch_coverage=1 00:08:36.497 --rc genhtml_function_coverage=1 00:08:36.497 --rc genhtml_legend=1 00:08:36.497 --rc geninfo_all_blocks=1 00:08:36.497 --rc geninfo_unexecuted_blocks=1 00:08:36.497 00:08:36.497 ' 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:36.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.497 --rc genhtml_branch_coverage=1 00:08:36.497 --rc genhtml_function_coverage=1 00:08:36.497 --rc genhtml_legend=1 00:08:36.497 --rc geninfo_all_blocks=1 00:08:36.497 --rc geninfo_unexecuted_blocks=1 00:08:36.497 00:08:36.497 ' 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:36.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.497 --rc genhtml_branch_coverage=1 00:08:36.497 --rc genhtml_function_coverage=1 00:08:36.497 --rc genhtml_legend=1 00:08:36.497 --rc geninfo_all_blocks=1 00:08:36.497 --rc geninfo_unexecuted_blocks=1 00:08:36.497 00:08:36.497 ' 00:08:36.497 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:36.497 
12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.497 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.498 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:36.498 12:16:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:36.498 Cannot find device "nvmf_init_br" 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:36.498 Cannot find device "nvmf_init_br2" 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:36.498 Cannot find device "nvmf_tgt_br" 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:36.498 Cannot find device "nvmf_tgt_br2" 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:36.498 Cannot find device "nvmf_init_br" 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:36.498 Cannot find device "nvmf_init_br2" 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:36.498 Cannot find device "nvmf_tgt_br" 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:36.498 Cannot find device "nvmf_tgt_br2" 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:36.498 Cannot find device "nvmf_br" 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:36.498 Cannot find device "nvmf_init_if" 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:08:36.498 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:36.758 Cannot find device "nvmf_init_if2" 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:36.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:08:36.759 
12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:36.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:36.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:36.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:08:36.759 00:08:36.759 --- 10.0.0.3 ping statistics --- 00:08:36.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.759 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:36.759 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:36.759 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:08:36.759 00:08:36.759 --- 10.0.0.4 ping statistics --- 00:08:36.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.759 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:36.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:08:36.759 00:08:36.759 --- 10.0.0.1 ping statistics --- 00:08:36.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.759 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:36.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:36.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:36.759 00:08:36.759 --- 10.0.0.2 ping statistics --- 00:08:36.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.759 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:36.759 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=65734 00:08:37.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 65734 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 65734 ']' 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:37.039 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:37.039 [2024-12-06 12:16:23.504845] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:08:37.039 [2024-12-06 12:16:23.504932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.039 [2024-12-06 12:16:23.658393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.299 [2024-12-06 12:16:23.696780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.299 [2024-12-06 12:16:23.697071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.299 [2024-12-06 12:16:23.697098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.299 [2024-12-06 12:16:23.697109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.299 [2024-12-06 12:16:23.697120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.299 [2024-12-06 12:16:23.698077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.299 [2024-12-06 12:16:23.698202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.299 [2024-12-06 12:16:23.698388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.299 [2024-12-06 12:16:23.698397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.299 [2024-12-06 12:16:23.733325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.867 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.867 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:37.867 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:37.867 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.867 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:37.867 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.867 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:38.126 [2024-12-06 12:16:24.762198] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.385 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:38.644 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:38.644 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:38.644 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:38.644 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:38.903 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:38.903 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:39.163 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:39.163 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:39.422 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:39.681 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:39.681 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:39.940 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:39.940 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.200 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:40.200 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:40.460 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:40.719 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:40.719 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.978 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:40.978 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:41.237 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:41.496 [2024-12-06 12:16:28.033297] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:41.496 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:41.755 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:42.015 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid=539e2455-b2a8-46ce-bfce-40a317783b05 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:42.015 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:42.015 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:42.015 12:16:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:42.015 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:42.015 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:42.015 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:44.550 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:44.550 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:44.550 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:44.550 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:44.550 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:44.550 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:44.550 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:44.550 [global] 00:08:44.550 thread=1 00:08:44.550 invalidate=1 00:08:44.550 rw=write 00:08:44.550 time_based=1 00:08:44.550 runtime=1 00:08:44.550 ioengine=libaio 00:08:44.550 direct=1 00:08:44.550 bs=4096 00:08:44.550 iodepth=1 00:08:44.550 norandommap=0 00:08:44.550 numjobs=1 00:08:44.550 00:08:44.550 verify_dump=1 00:08:44.550 verify_backlog=512 00:08:44.550 verify_state_save=0 00:08:44.550 do_verify=1 00:08:44.550 verify=crc32c-intel 00:08:44.550 [job0] 00:08:44.550 filename=/dev/nvme0n1 00:08:44.550 [job1] 00:08:44.550 filename=/dev/nvme0n2 00:08:44.550 [job2] 00:08:44.550 filename=/dev/nvme0n3 00:08:44.550 [job3] 00:08:44.550 filename=/dev/nvme0n4 00:08:44.550 Could not set queue depth (nvme0n1) 00:08:44.550 Could not set queue depth (nvme0n2) 00:08:44.550 Could not set queue depth (nvme0n3) 00:08:44.550 Could not set queue depth (nvme0n4) 00:08:44.550 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:44.550 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:44.550 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:44.550 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:44.550 fio-3.35 00:08:44.550 Starting 4 threads 00:08:45.489 00:08:45.489 job0: (groupid=0, jobs=1): err= 0: pid=65918: Fri Dec 6 12:16:32 2024 00:08:45.489 read: IOPS=3009, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1001msec) 00:08:45.489 slat (nsec): min=10985, max=47052, avg=13168.64, stdev=3618.69 00:08:45.489 clat (usec): min=137, max=521, avg=169.92, stdev=20.71 00:08:45.489 lat (usec): min=149, max=539, avg=183.09, stdev=21.29 00:08:45.489 clat percentiles (usec): 00:08:45.489 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:08:45.489 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:08:45.489 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 202], 00:08:45.489 | 99.00th=[ 219], 99.50th=[ 233], 99.90th=[ 379], 99.95th=[ 465], 00:08:45.489 | 99.99th=[ 523] 
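For reference, the target provisioning traced above (the target/fio.sh RPC calls followed by nvme connect) condenses to the sketch below: a TCP transport, seven malloc bdevs of which two are exposed directly while the rest form a RAID-0 and a concat volume, one subsystem with four namespaces listening on 10.0.0.3:4420, and a kernel-initiator connect that waitforserial polls until four block devices appear. NQN, serial, host UUID and paths are copied from this run's log; treat it as an illustrative condensation of the traced commands, not the test script itself.

# Illustrative condensation of the target/fio.sh trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
# Seven 64 MiB / 512-byte-block malloc bdevs, returned as Malloc0..Malloc6.
for i in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
# One subsystem, four namespaces, one NVMe/TCP listener.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Kernel initiator side: connect, then wait until all four namespaces show up.
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 \
     --hostid=539e2455-b2a8-46ce-bfce-40a317783b05
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 4, as in the waitforserial loop

fio then drives /dev/nvme0n1 through /dev/nvme0n4 with the 4 KiB, iodepth=1 write job file shown above, verifying data with crc32c-intel.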
00:08:45.489 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:45.489 slat (usec): min=13, max=142, avg=20.65, stdev= 6.80 00:08:45.489 clat (usec): min=91, max=301, avg=122.23, stdev=18.02 00:08:45.489 lat (usec): min=109, max=335, avg=142.88, stdev=20.02 00:08:45.489 clat percentiles (usec): 00:08:45.489 | 1.00th=[ 96], 5.00th=[ 101], 10.00th=[ 103], 20.00th=[ 108], 00:08:45.489 | 30.00th=[ 112], 40.00th=[ 116], 50.00th=[ 120], 60.00th=[ 124], 00:08:45.489 | 70.00th=[ 128], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 153], 00:08:45.489 | 99.00th=[ 176], 99.50th=[ 206], 99.90th=[ 245], 99.95th=[ 281], 00:08:45.489 | 99.99th=[ 302] 00:08:45.489 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:08:45.489 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:45.489 lat (usec) : 100=2.05%, 250=97.68%, 500=0.25%, 750=0.02% 00:08:45.489 cpu : usr=2.40%, sys=7.70%, ctx=6085, majf=0, minf=7 00:08:45.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:45.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.489 issued rwts: total=3013,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:45.489 job1: (groupid=0, jobs=1): err= 0: pid=65919: Fri Dec 6 12:16:32 2024 00:08:45.489 read: IOPS=1909, BW=7636KiB/s (7820kB/s)(7644KiB/1001msec) 00:08:45.489 slat (nsec): min=7877, max=78366, avg=11872.04, stdev=3350.21 00:08:45.489 clat (usec): min=189, max=2066, avg=271.39, stdev=50.19 00:08:45.489 lat (usec): min=204, max=2078, avg=283.26, stdev=50.38 00:08:45.489 clat percentiles (usec): 00:08:45.489 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 249], 00:08:45.489 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:08:45.489 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 314], 00:08:45.489 | 99.00th=[ 338], 99.50th=[ 343], 99.90th=[ 898], 99.95th=[ 2073], 00:08:45.489 | 99.99th=[ 2073] 00:08:45.489 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:45.489 slat (usec): min=9, max=153, avg=16.94, stdev= 6.23 00:08:45.489 clat (usec): min=144, max=802, avg=204.67, stdev=25.04 00:08:45.489 lat (usec): min=166, max=820, avg=221.61, stdev=25.87 00:08:45.489 clat percentiles (usec): 00:08:45.489 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:08:45.489 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:08:45.489 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 241], 00:08:45.489 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 293], 00:08:45.489 | 99.99th=[ 799] 00:08:45.489 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:08:45.489 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:45.489 lat (usec) : 250=61.10%, 500=38.80%, 750=0.03%, 1000=0.05% 00:08:45.489 lat (msec) : 4=0.03% 00:08:45.489 cpu : usr=1.80%, sys=4.20%, ctx=3961, majf=0, minf=8 00:08:45.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:45.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.489 issued rwts: total=1911,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.489 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:08:45.489 job2: (groupid=0, jobs=1): err= 0: pid=65921: Fri Dec 6 12:16:32 2024 00:08:45.489 read: IOPS=2831, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:08:45.489 slat (nsec): min=10577, max=60391, avg=13227.19, stdev=3914.02 00:08:45.489 clat (usec): min=140, max=250, avg=173.55, stdev=17.31 00:08:45.489 lat (usec): min=152, max=264, avg=186.78, stdev=17.99 00:08:45.489 clat percentiles (usec): 00:08:45.489 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:08:45.489 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:08:45.489 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 206], 00:08:45.489 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 249], 99.95th=[ 249], 00:08:45.489 | 99.99th=[ 251] 00:08:45.489 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:45.489 slat (nsec): min=13180, max=94192, avg=20556.28, stdev=5993.94 00:08:45.489 clat (usec): min=97, max=834, avg=129.96, stdev=22.77 00:08:45.489 lat (usec): min=114, max=862, avg=150.52, stdev=24.10 00:08:45.489 clat percentiles (usec): 00:08:45.489 | 1.00th=[ 103], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 116], 00:08:45.489 | 30.00th=[ 120], 40.00th=[ 124], 50.00th=[ 128], 60.00th=[ 133], 00:08:45.489 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 153], 95.00th=[ 161], 00:08:45.489 | 99.00th=[ 180], 99.50th=[ 182], 99.90th=[ 223], 99.95th=[ 562], 00:08:45.489 | 99.99th=[ 832] 00:08:45.489 bw ( KiB/s): min=12312, max=12312, per=30.09%, avg=12312.00, stdev= 0.00, samples=1 00:08:45.489 iops : min= 3078, max= 3078, avg=3078.00, stdev= 0.00, samples=1 00:08:45.489 lat (usec) : 100=0.15%, 250=99.78%, 500=0.03%, 750=0.02%, 1000=0.02% 00:08:45.489 cpu : usr=1.90%, sys=8.00%, ctx=5907, majf=0, minf=17 00:08:45.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:45.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.489 issued rwts: total=2834,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:45.489 job3: (groupid=0, jobs=1): err= 0: pid=65922: Fri Dec 6 12:16:32 2024 00:08:45.489 read: IOPS=1909, BW=7636KiB/s (7820kB/s)(7644KiB/1001msec) 00:08:45.489 slat (nsec): min=10863, max=45020, avg=13829.82, stdev=4063.85 00:08:45.489 clat (usec): min=210, max=2157, avg=269.15, stdev=51.71 00:08:45.489 lat (usec): min=222, max=2169, avg=282.98, stdev=51.66 00:08:45.489 clat percentiles (usec): 00:08:45.489 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 247], 00:08:45.489 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:08:45.489 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 310], 00:08:45.489 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 955], 99.95th=[ 2147], 00:08:45.489 | 99.99th=[ 2147] 00:08:45.489 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:45.490 slat (nsec): min=10519, max=72931, avg=20016.34, stdev=6326.09 00:08:45.490 clat (usec): min=131, max=767, avg=201.37, stdev=24.18 00:08:45.490 lat (usec): min=171, max=793, avg=221.39, stdev=25.15 00:08:45.490 clat percentiles (usec): 00:08:45.490 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 184], 00:08:45.490 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 204], 00:08:45.490 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 237], 00:08:45.490 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 281], 
99.95th=[ 281], 00:08:45.490 | 99.99th=[ 766] 00:08:45.490 bw ( KiB/s): min= 8208, max= 8208, per=20.06%, avg=8208.00, stdev= 0.00, samples=1 00:08:45.490 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:08:45.490 lat (usec) : 250=61.76%, 500=38.17%, 1000=0.05% 00:08:45.490 lat (msec) : 4=0.03% 00:08:45.490 cpu : usr=1.50%, sys=5.70%, ctx=3959, majf=0, minf=7 00:08:45.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:45.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.490 issued rwts: total=1911,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:45.490 00:08:45.490 Run status group 0 (all jobs): 00:08:45.490 READ: bw=37.7MiB/s (39.6MB/s), 7636KiB/s-11.8MiB/s (7820kB/s-12.3MB/s), io=37.8MiB (39.6MB), run=1001-1001msec 00:08:45.490 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:08:45.490 00:08:45.490 Disk stats (read/write): 00:08:45.490 nvme0n1: ios=2610/2712, merge=0/0, ticks=477/353, in_queue=830, util=88.48% 00:08:45.490 nvme0n2: ios=1583/1920, merge=0/0, ticks=432/373, in_queue=805, util=88.87% 00:08:45.490 nvme0n3: ios=2505/2560, merge=0/0, ticks=435/354, in_queue=789, util=89.16% 00:08:45.490 nvme0n4: ios=1536/1919, merge=0/0, ticks=416/384, in_queue=800, util=89.72% 00:08:45.490 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:45.490 [global] 00:08:45.490 thread=1 00:08:45.490 invalidate=1 00:08:45.490 rw=randwrite 00:08:45.490 time_based=1 00:08:45.490 runtime=1 00:08:45.490 ioengine=libaio 00:08:45.490 direct=1 00:08:45.490 bs=4096 00:08:45.490 iodepth=1 00:08:45.490 norandommap=0 00:08:45.490 numjobs=1 00:08:45.490 00:08:45.490 verify_dump=1 00:08:45.490 verify_backlog=512 00:08:45.490 verify_state_save=0 00:08:45.490 do_verify=1 00:08:45.490 verify=crc32c-intel 00:08:45.490 [job0] 00:08:45.490 filename=/dev/nvme0n1 00:08:45.490 [job1] 00:08:45.490 filename=/dev/nvme0n2 00:08:45.490 [job2] 00:08:45.490 filename=/dev/nvme0n3 00:08:45.490 [job3] 00:08:45.490 filename=/dev/nvme0n4 00:08:45.749 Could not set queue depth (nvme0n1) 00:08:45.749 Could not set queue depth (nvme0n2) 00:08:45.749 Could not set queue depth (nvme0n3) 00:08:45.749 Could not set queue depth (nvme0n4) 00:08:45.749 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:45.749 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:45.749 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:45.749 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:45.749 fio-3.35 00:08:45.749 Starting 4 threads 00:08:47.131 00:08:47.131 job0: (groupid=0, jobs=1): err= 0: pid=65981: Fri Dec 6 12:16:33 2024 00:08:47.131 read: IOPS=3054, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec) 00:08:47.131 slat (nsec): min=10794, max=53597, avg=12707.62, stdev=3407.35 00:08:47.131 clat (usec): min=137, max=389, avg=168.14, stdev=18.42 00:08:47.131 lat (usec): min=150, max=401, avg=180.85, stdev=18.72 00:08:47.131 clat percentiles (usec): 00:08:47.131 | 1.00th=[ 143], 5.00th=[ 147], 
10.00th=[ 151], 20.00th=[ 155], 00:08:47.131 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:08:47.131 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:08:47.131 | 99.00th=[ 212], 99.50th=[ 239], 99.90th=[ 371], 99.95th=[ 383], 00:08:47.131 | 99.99th=[ 392] 00:08:47.131 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:47.131 slat (nsec): min=13256, max=95148, avg=19623.10, stdev=5836.47 00:08:47.131 clat (usec): min=91, max=1578, avg=122.85, stdev=34.27 00:08:47.131 lat (usec): min=108, max=1597, avg=142.47, stdev=34.93 00:08:47.131 clat percentiles (usec): 00:08:47.131 | 1.00th=[ 95], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 108], 00:08:47.131 | 30.00th=[ 112], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 124], 00:08:47.131 | 70.00th=[ 129], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 155], 00:08:47.131 | 99.00th=[ 176], 99.50th=[ 202], 99.90th=[ 445], 99.95th=[ 619], 00:08:47.131 | 99.99th=[ 1582] 00:08:47.131 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:08:47.131 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:47.131 lat (usec) : 100=2.64%, 250=97.00%, 500=0.33%, 750=0.02% 00:08:47.131 lat (msec) : 2=0.02% 00:08:47.131 cpu : usr=2.20%, sys=7.80%, ctx=6130, majf=0, minf=11 00:08:47.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.131 issued rwts: total=3058,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.131 job1: (groupid=0, jobs=1): err= 0: pid=65982: Fri Dec 6 12:16:33 2024 00:08:47.131 read: IOPS=1732, BW=6929KiB/s (7095kB/s)(6936KiB/1001msec) 00:08:47.131 slat (nsec): min=11075, max=65944, avg=17328.95, stdev=5504.79 00:08:47.131 clat (usec): min=154, max=636, avg=292.17, stdev=58.20 00:08:47.131 lat (usec): min=169, max=659, avg=309.50, stdev=60.14 00:08:47.131 clat percentiles (usec): 00:08:47.131 | 1.00th=[ 235], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 260], 00:08:47.131 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:08:47.131 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 396], 00:08:47.131 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 619], 99.95th=[ 635], 00:08:47.131 | 99.99th=[ 635] 00:08:47.131 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:47.131 slat (nsec): min=16717, max=94448, avg=23416.85, stdev=6305.04 00:08:47.131 clat (usec): min=98, max=975, avg=199.28, stdev=37.88 00:08:47.131 lat (usec): min=120, max=1001, avg=222.70, stdev=38.60 00:08:47.131 clat percentiles (usec): 00:08:47.131 | 1.00th=[ 112], 5.00th=[ 126], 10.00th=[ 161], 20.00th=[ 184], 00:08:47.131 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:08:47.131 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 247], 00:08:47.131 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 515], 99.95th=[ 519], 00:08:47.131 | 99.99th=[ 979] 00:08:47.131 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:08:47.131 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:47.131 lat (usec) : 100=0.05%, 250=55.79%, 500=42.60%, 750=1.53%, 1000=0.03% 00:08:47.131 cpu : usr=1.00%, sys=6.60%, ctx=3786, majf=0, minf=15 00:08:47.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:08:47.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.131 issued rwts: total=1734,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.131 job2: (groupid=0, jobs=1): err= 0: pid=65983: Fri Dec 6 12:16:33 2024 00:08:47.131 read: IOPS=2578, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:08:47.131 slat (nsec): min=10932, max=45885, avg=13516.34, stdev=3720.88 00:08:47.131 clat (usec): min=144, max=584, avg=183.06, stdev=20.18 00:08:47.131 lat (usec): min=156, max=596, avg=196.58, stdev=20.49 00:08:47.131 clat percentiles (usec): 00:08:47.131 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:08:47.131 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:08:47.131 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 217], 00:08:47.131 | 99.00th=[ 235], 99.50th=[ 241], 99.90th=[ 269], 99.95th=[ 310], 00:08:47.131 | 99.99th=[ 586] 00:08:47.131 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:47.131 slat (nsec): min=12994, max=83142, avg=19868.33, stdev=5684.42 00:08:47.131 clat (usec): min=99, max=949, avg=137.83, stdev=25.38 00:08:47.131 lat (usec): min=117, max=966, avg=157.70, stdev=26.00 00:08:47.131 clat percentiles (usec): 00:08:47.131 | 1.00th=[ 109], 5.00th=[ 116], 10.00th=[ 120], 20.00th=[ 124], 00:08:47.131 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 139], 00:08:47.131 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 167], 00:08:47.131 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 202], 99.95th=[ 750], 00:08:47.131 | 99.99th=[ 947] 00:08:47.131 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:08:47.131 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:47.131 lat (usec) : 100=0.02%, 250=99.84%, 500=0.07%, 750=0.04%, 1000=0.04% 00:08:47.131 cpu : usr=1.70%, sys=7.70%, ctx=5654, majf=0, minf=5 00:08:47.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.131 issued rwts: total=2581,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.131 job3: (groupid=0, jobs=1): err= 0: pid=65984: Fri Dec 6 12:16:33 2024 00:08:47.131 read: IOPS=1711, BW=6845KiB/s (7009kB/s)(6852KiB/1001msec) 00:08:47.131 slat (nsec): min=11419, max=42202, avg=14808.11, stdev=3603.70 00:08:47.131 clat (usec): min=180, max=545, avg=287.16, stdev=35.66 00:08:47.131 lat (usec): min=204, max=571, avg=301.97, stdev=36.37 00:08:47.131 clat percentiles (usec): 00:08:47.131 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:08:47.131 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:08:47.131 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 343], 00:08:47.131 | 99.00th=[ 453], 99.50th=[ 469], 99.90th=[ 545], 99.95th=[ 545], 00:08:47.131 | 99.99th=[ 545] 00:08:47.131 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:47.131 slat (nsec): min=16546, max=96115, avg=23105.75, stdev=7468.24 00:08:47.131 clat (usec): min=108, max=6576, avg=209.40, stdev=172.10 00:08:47.131 lat (usec): min=129, max=6597, avg=232.51, stdev=172.56 
00:08:47.131 clat percentiles (usec): 00:08:47.131 | 1.00th=[ 126], 5.00th=[ 143], 10.00th=[ 169], 20.00th=[ 184], 00:08:47.131 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:08:47.131 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 247], 00:08:47.131 | 99.00th=[ 343], 99.50th=[ 388], 99.90th=[ 2147], 99.95th=[ 3359], 00:08:47.131 | 99.99th=[ 6587] 00:08:47.131 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:08:47.131 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:47.131 lat (usec) : 250=54.67%, 500=44.96%, 750=0.21%, 1000=0.03% 00:08:47.131 lat (msec) : 2=0.05%, 4=0.05%, 10=0.03% 00:08:47.131 cpu : usr=1.20%, sys=5.90%, ctx=3761, majf=0, minf=17 00:08:47.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.131 issued rwts: total=1713,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.131 00:08:47.132 Run status group 0 (all jobs): 00:08:47.132 READ: bw=35.5MiB/s (37.2MB/s), 6845KiB/s-11.9MiB/s (7009kB/s-12.5MB/s), io=35.5MiB (37.2MB), run=1001-1001msec 00:08:47.132 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:08:47.132 00:08:47.132 Disk stats (read/write): 00:08:47.132 nvme0n1: ios=2609/2699, merge=0/0, ticks=473/355, in_queue=828, util=88.06% 00:08:47.132 nvme0n2: ios=1585/1699, merge=0/0, ticks=504/349, in_queue=853, util=89.60% 00:08:47.132 nvme0n3: ios=2262/2560, merge=0/0, ticks=418/368, in_queue=786, util=89.20% 00:08:47.132 nvme0n4: ios=1536/1653, merge=0/0, ticks=445/359, in_queue=804, util=89.04% 00:08:47.132 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:47.132 [global] 00:08:47.132 thread=1 00:08:47.132 invalidate=1 00:08:47.132 rw=write 00:08:47.132 time_based=1 00:08:47.132 runtime=1 00:08:47.132 ioengine=libaio 00:08:47.132 direct=1 00:08:47.132 bs=4096 00:08:47.132 iodepth=128 00:08:47.132 norandommap=0 00:08:47.132 numjobs=1 00:08:47.132 00:08:47.132 verify_dump=1 00:08:47.132 verify_backlog=512 00:08:47.132 verify_state_save=0 00:08:47.132 do_verify=1 00:08:47.132 verify=crc32c-intel 00:08:47.132 [job0] 00:08:47.132 filename=/dev/nvme0n1 00:08:47.132 [job1] 00:08:47.132 filename=/dev/nvme0n2 00:08:47.132 [job2] 00:08:47.132 filename=/dev/nvme0n3 00:08:47.132 [job3] 00:08:47.132 filename=/dev/nvme0n4 00:08:47.132 Could not set queue depth (nvme0n1) 00:08:47.132 Could not set queue depth (nvme0n2) 00:08:47.132 Could not set queue depth (nvme0n3) 00:08:47.132 Could not set queue depth (nvme0n4) 00:08:47.132 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:47.132 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:47.132 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:47.132 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:47.132 fio-3.35 00:08:47.132 Starting 4 threads 00:08:48.514 00:08:48.514 job0: (groupid=0, jobs=1): err= 0: pid=66044: Fri Dec 6 12:16:34 2024 00:08:48.514 
read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:08:48.514 slat (usec): min=6, max=13183, avg=183.73, stdev=812.66 00:08:48.514 clat (usec): min=7676, max=61009, avg=22121.21, stdev=8793.05 00:08:48.514 lat (usec): min=7691, max=61029, avg=22304.95, stdev=8883.52 00:08:48.514 clat percentiles (usec): 00:08:48.514 | 1.00th=[10159], 5.00th=[15139], 10.00th=[15401], 20.00th=[15664], 00:08:48.514 | 30.00th=[15795], 40.00th=[17433], 50.00th=[21627], 60.00th=[22152], 00:08:48.514 | 70.00th=[22414], 80.00th=[25560], 90.00th=[33424], 95.00th=[43779], 00:08:48.514 | 99.00th=[53740], 99.50th=[57934], 99.90th=[58983], 99.95th=[58983], 00:08:48.514 | 99.99th=[61080] 00:08:48.514 write: IOPS=2074, BW=8298KiB/s (8497kB/s)(8356KiB/1007msec); 0 zone resets 00:08:48.514 slat (usec): min=12, max=8099, avg=288.85, stdev=1000.40 00:08:48.514 clat (usec): min=4118, max=70558, avg=38841.05, stdev=16475.49 00:08:48.514 lat (usec): min=4794, max=70606, avg=39129.90, stdev=16566.20 00:08:48.514 clat percentiles (usec): 00:08:48.514 | 1.00th=[ 5407], 5.00th=[17695], 10.00th=[20841], 20.00th=[23725], 00:08:48.514 | 30.00th=[26084], 40.00th=[27657], 50.00th=[39584], 60.00th=[42206], 00:08:48.514 | 70.00th=[46400], 80.00th=[55313], 90.00th=[65274], 95.00th=[68682], 00:08:48.514 | 99.00th=[69731], 99.50th=[69731], 99.90th=[70779], 99.95th=[70779], 00:08:48.514 | 99.99th=[70779] 00:08:48.514 bw ( KiB/s): min= 8175, max= 8192, per=12.76%, avg=8183.50, stdev=12.02, samples=2 00:08:48.514 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:08:48.514 lat (msec) : 10=1.26%, 20=26.18%, 50=58.64%, 100=13.92% 00:08:48.514 cpu : usr=3.18%, sys=6.36%, ctx=306, majf=0, minf=13 00:08:48.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:08:48.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.514 issued rwts: total=2048,2089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:48.514 job1: (groupid=0, jobs=1): err= 0: pid=66045: Fri Dec 6 12:16:34 2024 00:08:48.514 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:08:48.514 slat (usec): min=6, max=7610, avg=136.48, stdev=606.68 00:08:48.514 clat (usec): min=9766, max=36352, avg=17797.17, stdev=3553.39 00:08:48.514 lat (usec): min=11928, max=36401, avg=17933.66, stdev=3572.64 00:08:48.514 clat percentiles (usec): 00:08:48.514 | 1.00th=[12125], 5.00th=[13960], 10.00th=[14484], 20.00th=[15533], 00:08:48.514 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16057], 60.00th=[16909], 00:08:48.514 | 70.00th=[19268], 80.00th=[21627], 90.00th=[22152], 95.00th=[24511], 00:08:48.514 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:08:48.514 | 99.99th=[36439] 00:08:48.514 write: IOPS=3788, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1007msec); 0 zone resets 00:08:48.514 slat (usec): min=8, max=7534, avg=126.23, stdev=642.91 00:08:48.514 clat (usec): min=5459, max=45520, avg=16621.07, stdev=7288.97 00:08:48.514 lat (usec): min=6823, max=45547, avg=16747.30, stdev=7348.86 00:08:48.514 clat percentiles (usec): 00:08:48.514 | 1.00th=[ 9241], 5.00th=[11207], 10.00th=[11731], 20.00th=[11863], 00:08:48.514 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13698], 60.00th=[14222], 00:08:48.514 | 70.00th=[17171], 80.00th=[20055], 90.00th=[25822], 95.00th=[35914], 00:08:48.514 | 99.00th=[41681], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 
00:08:48.514 | 99.99th=[45351] 00:08:48.514 bw ( KiB/s): min=13093, max=16384, per=22.98%, avg=14738.50, stdev=2327.09, samples=2 00:08:48.514 iops : min= 3273, max= 4096, avg=3684.50, stdev=581.95, samples=2 00:08:48.514 lat (msec) : 10=1.35%, 20=75.82%, 50=22.83% 00:08:48.514 cpu : usr=3.98%, sys=9.64%, ctx=288, majf=0, minf=15 00:08:48.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:08:48.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.514 issued rwts: total=3584,3815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:48.514 job2: (groupid=0, jobs=1): err= 0: pid=66046: Fri Dec 6 12:16:34 2024 00:08:48.514 read: IOPS=4610, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1002msec) 00:08:48.514 slat (usec): min=5, max=5703, avg=100.83, stdev=482.39 00:08:48.514 clat (usec): min=1329, max=16389, avg=13412.25, stdev=945.50 00:08:48.514 lat (usec): min=1353, max=16424, avg=13513.08, stdev=815.51 00:08:48.514 clat percentiles (usec): 00:08:48.514 | 1.00th=[10552], 5.00th=[12911], 10.00th=[13042], 20.00th=[13173], 00:08:48.514 | 30.00th=[13304], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:08:48.514 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13960], 95.00th=[14222], 00:08:48.514 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16319], 99.95th=[16319], 00:08:48.514 | 99.99th=[16450] 00:08:48.514 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:08:48.514 slat (usec): min=9, max=3030, avg=96.81, stdev=418.85 00:08:48.514 clat (usec): min=3937, max=13731, avg=12610.58, stdev=951.34 00:08:48.514 lat (usec): min=3958, max=13768, avg=12707.39, stdev=858.59 00:08:48.514 clat percentiles (usec): 00:08:48.514 | 1.00th=[ 7570], 5.00th=[11863], 10.00th=[12387], 20.00th=[12518], 00:08:48.514 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12780], 00:08:48.514 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13304], 00:08:48.514 | 99.00th=[13566], 99.50th=[13566], 99.90th=[13698], 99.95th=[13698], 00:08:48.514 | 99.99th=[13698] 00:08:48.514 bw ( KiB/s): min=20480, max=20480, per=31.94%, avg=20480.00, stdev= 0.00, samples=1 00:08:48.515 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:08:48.515 lat (msec) : 2=0.12%, 4=0.03%, 10=0.77%, 20=99.08% 00:08:48.515 cpu : usr=4.60%, sys=13.29%, ctx=317, majf=0, minf=10 00:08:48.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:48.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.515 issued rwts: total=4620,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:48.515 job3: (groupid=0, jobs=1): err= 0: pid=66047: Fri Dec 6 12:16:34 2024 00:08:48.515 read: IOPS=4825, BW=18.8MiB/s (19.8MB/s)(18.9MiB/1002msec) 00:08:48.515 slat (usec): min=5, max=3832, avg=98.11, stdev=386.08 00:08:48.515 clat (usec): min=1719, max=17259, avg=12943.09, stdev=1387.16 00:08:48.515 lat (usec): min=1732, max=17294, avg=13041.20, stdev=1420.41 00:08:48.515 clat percentiles (usec): 00:08:48.515 | 1.00th=[ 5932], 5.00th=[11338], 10.00th=[11994], 20.00th=[12518], 00:08:48.515 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:08:48.515 | 70.00th=[13304], 80.00th=[13435], 
90.00th=[14222], 95.00th=[14877], 00:08:48.515 | 99.00th=[15664], 99.50th=[15664], 99.90th=[16909], 99.95th=[17171], 00:08:48.515 | 99.99th=[17171] 00:08:48.515 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:08:48.515 slat (usec): min=11, max=3540, avg=94.74, stdev=434.22 00:08:48.515 clat (usec): min=9529, max=17073, avg=12507.75, stdev=983.21 00:08:48.515 lat (usec): min=9550, max=17091, avg=12602.49, stdev=1061.85 00:08:48.515 clat percentiles (usec): 00:08:48.515 | 1.00th=[10159], 5.00th=[11338], 10.00th=[11600], 20.00th=[11994], 00:08:48.515 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12387], 00:08:48.515 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13829], 95.00th=[14615], 00:08:48.515 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16909], 99.95th=[16909], 00:08:48.515 | 99.99th=[17171] 00:08:48.515 bw ( KiB/s): min=20439, max=20480, per=31.90%, avg=20459.50, stdev=28.99, samples=2 00:08:48.515 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:08:48.515 lat (msec) : 2=0.15%, 4=0.20%, 10=0.93%, 20=98.71% 00:08:48.515 cpu : usr=5.49%, sys=13.39%, ctx=404, majf=0, minf=15 00:08:48.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:48.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.515 issued rwts: total=4835,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:48.515 00:08:48.515 Run status group 0 (all jobs): 00:08:48.515 READ: bw=58.5MiB/s (61.4MB/s), 8135KiB/s-18.8MiB/s (8330kB/s-19.8MB/s), io=58.9MiB (61.8MB), run=1002-1007msec 00:08:48.515 WRITE: bw=62.6MiB/s (65.7MB/s), 8298KiB/s-20.0MiB/s (8497kB/s-20.9MB/s), io=63.1MiB (66.1MB), run=1002-1007msec 00:08:48.515 00:08:48.515 Disk stats (read/write): 00:08:48.515 nvme0n1: ios=1586/1751, merge=0/0, ticks=12062/22429, in_queue=34491, util=86.76% 00:08:48.515 nvme0n2: ios=3095/3447, merge=0/0, ticks=26902/22762, in_queue=49664, util=87.39% 00:08:48.515 nvme0n3: ios=4096/4192, merge=0/0, ticks=12183/11513, in_queue=23696, util=89.04% 00:08:48.515 nvme0n4: ios=4096/4399, merge=0/0, ticks=16710/14897, in_queue=31607, util=89.61% 00:08:48.515 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:48.515 [global] 00:08:48.515 thread=1 00:08:48.515 invalidate=1 00:08:48.515 rw=randwrite 00:08:48.515 time_based=1 00:08:48.515 runtime=1 00:08:48.515 ioengine=libaio 00:08:48.515 direct=1 00:08:48.515 bs=4096 00:08:48.515 iodepth=128 00:08:48.515 norandommap=0 00:08:48.515 numjobs=1 00:08:48.515 00:08:48.515 verify_dump=1 00:08:48.515 verify_backlog=512 00:08:48.515 verify_state_save=0 00:08:48.515 do_verify=1 00:08:48.515 verify=crc32c-intel 00:08:48.515 [job0] 00:08:48.515 filename=/dev/nvme0n1 00:08:48.515 [job1] 00:08:48.515 filename=/dev/nvme0n2 00:08:48.515 [job2] 00:08:48.515 filename=/dev/nvme0n3 00:08:48.515 [job3] 00:08:48.515 filename=/dev/nvme0n4 00:08:48.515 Could not set queue depth (nvme0n1) 00:08:48.515 Could not set queue depth (nvme0n2) 00:08:48.515 Could not set queue depth (nvme0n3) 00:08:48.515 Could not set queue depth (nvme0n4) 00:08:48.515 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:48.515 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:48.515 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:48.515 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:48.515 fio-3.35 00:08:48.515 Starting 4 threads 00:08:49.894 00:08:49.894 job0: (groupid=0, jobs=1): err= 0: pid=66106: Fri Dec 6 12:16:36 2024 00:08:49.894 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:08:49.894 slat (usec): min=7, max=10523, avg=204.71, stdev=912.38 00:08:49.894 clat (usec): min=7189, max=35876, avg=25173.57, stdev=4087.45 00:08:49.894 lat (usec): min=7196, max=40808, avg=25378.28, stdev=4121.50 00:08:49.894 clat percentiles (usec): 00:08:49.894 | 1.00th=[11600], 5.00th=[18220], 10.00th=[20841], 20.00th=[23462], 00:08:49.894 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:08:49.894 | 70.00th=[25822], 80.00th=[27395], 90.00th=[30802], 95.00th=[32900], 00:08:49.894 | 99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:08:49.894 | 99.99th=[35914] 00:08:49.894 write: IOPS=2577, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1003msec); 0 zone resets 00:08:49.894 slat (usec): min=4, max=8215, avg=177.19, stdev=714.50 00:08:49.894 clat (usec): min=432, max=34382, avg=23610.83, stdev=3969.84 00:08:49.894 lat (usec): min=2741, max=34405, avg=23788.02, stdev=3981.58 00:08:49.894 clat percentiles (usec): 00:08:49.894 | 1.00th=[12256], 5.00th=[17171], 10.00th=[18482], 20.00th=[21103], 00:08:49.894 | 30.00th=[22414], 40.00th=[22938], 50.00th=[24249], 60.00th=[25560], 00:08:49.894 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26870], 95.00th=[28181], 00:08:49.894 | 99.00th=[33424], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:08:49.894 | 99.99th=[34341] 00:08:49.894 bw ( KiB/s): min=11776, max=11776, per=18.34%, avg=11776.00, stdev= 0.00, samples=1 00:08:49.894 iops : min= 2944, max= 2944, avg=2944.00, stdev= 0.00, samples=1 00:08:49.894 lat (usec) : 500=0.02% 00:08:49.894 lat (msec) : 4=0.02%, 10=0.68%, 20=12.81%, 50=86.47% 00:08:49.894 cpu : usr=2.40%, sys=7.29%, ctx=740, majf=0, minf=17 00:08:49.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:08:49.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:49.894 issued rwts: total=2560,2585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:49.894 job1: (groupid=0, jobs=1): err= 0: pid=66107: Fri Dec 6 12:16:36 2024 00:08:49.894 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:08:49.894 slat (usec): min=5, max=4356, avg=88.22, stdev=385.11 00:08:49.894 clat (usec): min=2452, max=16038, avg=11688.04, stdev=1157.64 00:08:49.894 lat (usec): min=2466, max=16637, avg=11776.26, stdev=1165.58 00:08:49.894 clat percentiles (usec): 00:08:49.894 | 1.00th=[ 6587], 5.00th=[10159], 10.00th=[10552], 20.00th=[11338], 00:08:49.894 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:08:49.894 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:08:49.894 | 99.00th=[14353], 99.50th=[14877], 99.90th=[15401], 99.95th=[15664], 00:08:49.894 | 99.99th=[16057] 00:08:49.894 write: IOPS=5621, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:08:49.894 slat (usec): min=10, max=4576, avg=81.61, stdev=463.93 00:08:49.894 clat (usec): min=931, 
max=15963, avg=10813.20, stdev=998.03 00:08:49.894 lat (usec): min=2446, max=16015, avg=10894.81, stdev=1086.28 00:08:49.894 clat percentiles (usec): 00:08:49.894 | 1.00th=[ 7701], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:08:49.894 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:08:49.894 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:08:49.894 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15795], 99.95th=[15926], 00:08:49.894 | 99.99th=[15926] 00:08:49.894 bw ( KiB/s): min=24232, max=24232, per=37.74%, avg=24232.00, stdev= 0.00, samples=1 00:08:49.894 iops : min= 6058, max= 6058, avg=6058.00, stdev= 0.00, samples=1 00:08:49.894 lat (usec) : 1000=0.01% 00:08:49.894 lat (msec) : 4=0.20%, 10=8.07%, 20=91.73% 00:08:49.894 cpu : usr=4.90%, sys=14.79%, ctx=351, majf=0, minf=7 00:08:49.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:49.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:49.894 issued rwts: total=5632,5633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:49.894 job2: (groupid=0, jobs=1): err= 0: pid=66108: Fri Dec 6 12:16:36 2024 00:08:49.894 read: IOPS=4849, BW=18.9MiB/s (19.9MB/s)(19.0MiB/1003msec) 00:08:49.894 slat (usec): min=8, max=6457, avg=95.70, stdev=561.55 00:08:49.894 clat (usec): min=1403, max=21014, avg=13188.23, stdev=1425.27 00:08:49.894 lat (usec): min=2451, max=24720, avg=13283.93, stdev=1453.22 00:08:49.894 clat percentiles (usec): 00:08:49.894 | 1.00th=[ 8160], 5.00th=[11469], 10.00th=[12256], 20.00th=[12780], 00:08:49.894 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:08:49.894 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:08:49.894 | 99.00th=[17695], 99.50th=[20317], 99.90th=[21103], 99.95th=[21103], 00:08:49.894 | 99.99th=[21103] 00:08:49.894 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:08:49.894 slat (usec): min=10, max=9233, avg=97.19, stdev=587.50 00:08:49.894 clat (usec): min=4880, max=17287, avg=12285.68, stdev=1186.69 00:08:49.894 lat (usec): min=4919, max=17309, avg=12382.87, stdev=1058.63 00:08:49.894 clat percentiles (usec): 00:08:49.894 | 1.00th=[ 7635], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:08:49.894 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:08:49.894 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13435], 00:08:49.894 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17171], 99.95th=[17171], 00:08:49.894 | 99.99th=[17171] 00:08:49.894 bw ( KiB/s): min=20480, max=20480, per=31.89%, avg=20480.00, stdev= 0.00, samples=2 00:08:49.894 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:08:49.894 lat (msec) : 2=0.01%, 4=0.10%, 10=3.51%, 20=96.12%, 50=0.26% 00:08:49.894 cpu : usr=5.19%, sys=12.38%, ctx=232, majf=0, minf=9 00:08:49.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:49.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:49.894 issued rwts: total=4864,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:49.894 job3: (groupid=0, jobs=1): err= 0: pid=66109: Fri Dec 6 12:16:36 2024 00:08:49.894 read: 
IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:08:49.894 slat (usec): min=4, max=12399, avg=201.61, stdev=857.95 00:08:49.894 clat (usec): min=17433, max=41576, avg=25724.47, stdev=3686.89 00:08:49.894 lat (usec): min=17840, max=41607, avg=25926.08, stdev=3702.62 00:08:49.894 clat percentiles (usec): 00:08:49.894 | 1.00th=[19006], 5.00th=[19792], 10.00th=[21890], 20.00th=[22938], 00:08:49.894 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:08:49.894 | 70.00th=[26346], 80.00th=[29230], 90.00th=[31327], 95.00th=[31851], 00:08:49.894 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:08:49.894 | 99.99th=[41681] 00:08:49.894 write: IOPS=2808, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1007msec); 0 zone resets 00:08:49.894 slat (usec): min=5, max=10153, avg=163.83, stdev=674.42 00:08:49.894 clat (usec): min=6171, max=33016, avg=21834.90, stdev=5017.11 00:08:49.894 lat (usec): min=6757, max=36694, avg=21998.73, stdev=5033.57 00:08:49.894 clat percentiles (usec): 00:08:49.894 | 1.00th=[11338], 5.00th=[13173], 10.00th=[14746], 20.00th=[17433], 00:08:49.894 | 30.00th=[18220], 40.00th=[21365], 50.00th=[22676], 60.00th=[23462], 00:08:49.894 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27132], 95.00th=[28443], 00:08:49.895 | 99.00th=[31327], 99.50th=[31851], 99.90th=[32113], 99.95th=[32375], 00:08:49.895 | 99.99th=[32900] 00:08:49.895 bw ( KiB/s): min= 9320, max=12288, per=16.82%, avg=10804.00, stdev=2098.69, samples=2 00:08:49.895 iops : min= 2330, max= 3072, avg=2701.00, stdev=524.67, samples=2 00:08:49.895 lat (msec) : 10=0.26%, 20=21.90%, 50=77.84% 00:08:49.895 cpu : usr=2.68%, sys=7.65%, ctx=707, majf=0, minf=11 00:08:49.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:08:49.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:49.895 issued rwts: total=2560,2828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:49.895 00:08:49.895 Run status group 0 (all jobs): 00:08:49.895 READ: bw=60.6MiB/s (63.5MB/s), 9.93MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=61.0MiB (64.0MB), run=1002-1007msec 00:08:49.895 WRITE: bw=62.7MiB/s (65.8MB/s), 10.1MiB/s-22.0MiB/s (10.6MB/s-23.0MB/s), io=63.1MiB (66.2MB), run=1002-1007msec 00:08:49.895 00:08:49.895 Disk stats (read/write): 00:08:49.895 nvme0n1: ios=2098/2316, merge=0/0, ticks=25587/25147, in_queue=50734, util=86.16% 00:08:49.895 nvme0n2: ios=4657/5046, merge=0/0, ticks=25792/21957, in_queue=47749, util=90.08% 00:08:49.895 nvme0n3: ios=4133/4409, merge=0/0, ticks=51334/50176, in_queue=101510, util=90.80% 00:08:49.895 nvme0n4: ios=2048/2544, merge=0/0, ticks=25863/26097, in_queue=51960, util=89.18% 00:08:49.895 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:49.895 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66122 00:08:49.895 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:49.895 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:49.895 [global] 00:08:49.895 thread=1 00:08:49.895 invalidate=1 00:08:49.895 rw=read 00:08:49.895 time_based=1 00:08:49.895 runtime=10 00:08:49.895 ioengine=libaio 00:08:49.895 direct=1 00:08:49.895 bs=4096 00:08:49.895 iodepth=1 00:08:49.895 norandommap=1 
00:08:49.895 numjobs=1 00:08:49.895 00:08:49.895 [job0] 00:08:49.895 filename=/dev/nvme0n1 00:08:49.895 [job1] 00:08:49.895 filename=/dev/nvme0n2 00:08:49.895 [job2] 00:08:49.895 filename=/dev/nvme0n3 00:08:49.895 [job3] 00:08:49.895 filename=/dev/nvme0n4 00:08:49.895 Could not set queue depth (nvme0n1) 00:08:49.895 Could not set queue depth (nvme0n2) 00:08:49.895 Could not set queue depth (nvme0n3) 00:08:49.895 Could not set queue depth (nvme0n4) 00:08:49.895 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.895 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.895 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.895 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.895 fio-3.35 00:08:49.895 Starting 4 threads 00:08:53.181 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:53.181 fio: pid=66165, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:53.181 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=63639552, buflen=4096 00:08:53.181 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:53.181 fio: pid=66164, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:53.181 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=48267264, buflen=4096 00:08:53.181 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:53.181 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:53.439 fio: pid=66162, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:53.439 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=55173120, buflen=4096 00:08:53.439 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:53.439 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:53.697 fio: pid=66163, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:53.697 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=15847424, buflen=4096 00:08:53.697 00:08:53.697 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66162: Fri Dec 6 12:16:40 2024 00:08:53.697 read: IOPS=3905, BW=15.3MiB/s (16.0MB/s)(52.6MiB/3449msec) 00:08:53.697 slat (usec): min=7, max=10741, avg=14.68, stdev=162.39 00:08:53.697 clat (usec): min=7, max=3625, avg=240.12, stdev=75.17 00:08:53.697 lat (usec): min=135, max=11160, avg=254.79, stdev=179.10 00:08:53.697 clat percentiles (usec): 00:08:53.697 | 1.00th=[ 137], 5.00th=[ 149], 10.00th=[ 159], 20.00th=[ 182], 00:08:53.697 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 260], 00:08:53.697 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:08:53.698 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 644], 
99.95th=[ 1500], 00:08:53.698 | 99.99th=[ 3523] 00:08:53.698 bw ( KiB/s): min=13960, max=17232, per=22.66%, avg=14920.00, stdev=1162.87, samples=6 00:08:53.698 iops : min= 3490, max= 4308, avg=3730.00, stdev=290.72, samples=6 00:08:53.698 lat (usec) : 10=0.01%, 250=49.95%, 500=49.88%, 750=0.07%, 1000=0.01% 00:08:53.698 lat (msec) : 2=0.04%, 4=0.03% 00:08:53.698 cpu : usr=1.02%, sys=4.12%, ctx=13486, majf=0, minf=1 00:08:53.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:53.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.698 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.698 issued rwts: total=13471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.698 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:53.698 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66163: Fri Dec 6 12:16:40 2024 00:08:53.698 read: IOPS=5462, BW=21.3MiB/s (22.4MB/s)(79.1MiB/3708msec) 00:08:53.698 slat (usec): min=10, max=11839, avg=15.69, stdev=177.89 00:08:53.698 clat (usec): min=3, max=1693, avg=166.05, stdev=32.43 00:08:53.698 lat (usec): min=131, max=11999, avg=181.74, stdev=181.23 00:08:53.698 clat percentiles (usec): 00:08:53.698 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:08:53.698 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:08:53.698 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 204], 00:08:53.698 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 412], 99.95th=[ 553], 00:08:53.698 | 99.99th=[ 1532] 00:08:53.698 bw ( KiB/s): min=19777, max=22816, per=33.22%, avg=21877.00, stdev=1135.65, samples=7 00:08:53.698 iops : min= 4944, max= 5704, avg=5469.14, stdev=284.02, samples=7 00:08:53.698 lat (usec) : 4=0.01%, 250=99.04%, 500=0.88%, 750=0.03%, 1000=0.01% 00:08:53.698 lat (msec) : 2=0.03% 00:08:53.698 cpu : usr=1.54%, sys=6.12%, ctx=20265, majf=0, minf=2 00:08:53.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:53.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.698 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.698 issued rwts: total=20254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.698 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:53.698 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66164: Fri Dec 6 12:16:40 2024 00:08:53.698 read: IOPS=3682, BW=14.4MiB/s (15.1MB/s)(46.0MiB/3200msec) 00:08:53.698 slat (usec): min=7, max=12772, avg=14.50, stdev=134.92 00:08:53.698 clat (usec): min=141, max=6999, avg=255.84, stdev=103.33 00:08:53.698 lat (usec): min=152, max=13055, avg=270.34, stdev=170.09 00:08:53.698 clat percentiles (usec): 00:08:53.698 | 1.00th=[ 163], 5.00th=[ 184], 10.00th=[ 217], 20.00th=[ 233], 00:08:53.698 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 265], 00:08:53.698 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:08:53.698 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 1045], 99.95th=[ 2089], 00:08:53.698 | 99.99th=[ 4178] 00:08:53.698 bw ( KiB/s): min=14432, max=16024, per=22.48%, avg=14802.33, stdev=607.10, samples=6 00:08:53.698 iops : min= 3608, max= 4006, avg=3700.50, stdev=151.83, samples=6 00:08:53.698 lat (usec) : 250=43.04%, 500=56.78%, 750=0.03%, 1000=0.04% 00:08:53.698 lat (msec) : 2=0.03%, 4=0.05%, 10=0.02% 00:08:53.698 cpu : usr=1.06%, sys=4.28%, 
ctx=11796, majf=0, minf=2 00:08:53.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:53.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.698 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.698 issued rwts: total=11785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.698 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:53.698 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66165: Fri Dec 6 12:16:40 2024 00:08:53.698 read: IOPS=5261, BW=20.6MiB/s (21.6MB/s)(60.7MiB/2953msec) 00:08:53.698 slat (nsec): min=10920, max=70407, avg=13471.98, stdev=4145.65 00:08:53.698 clat (usec): min=139, max=1730, avg=175.48, stdev=23.07 00:08:53.698 lat (usec): min=151, max=1743, avg=188.95, stdev=23.64 00:08:53.698 clat percentiles (usec): 00:08:53.698 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:08:53.698 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:08:53.698 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 210], 00:08:53.698 | 99.00th=[ 225], 99.50th=[ 231], 99.90th=[ 249], 99.95th=[ 269], 00:08:53.698 | 99.99th=[ 619] 00:08:53.698 bw ( KiB/s): min=19920, max=21496, per=32.00%, avg=21070.40, stdev=656.42, samples=5 00:08:53.698 iops : min= 4980, max= 5374, avg=5267.60, stdev=164.11, samples=5 00:08:53.698 lat (usec) : 250=99.90%, 500=0.08%, 750=0.01% 00:08:53.698 lat (msec) : 2=0.01% 00:08:53.698 cpu : usr=1.19%, sys=6.10%, ctx=15538, majf=0, minf=2 00:08:53.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:53.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.698 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.698 issued rwts: total=15538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.698 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:53.698 00:08:53.698 Run status group 0 (all jobs): 00:08:53.698 READ: bw=64.3MiB/s (67.4MB/s), 14.4MiB/s-21.3MiB/s (15.1MB/s-22.4MB/s), io=238MiB (250MB), run=2953-3708msec 00:08:53.698 00:08:53.698 Disk stats (read/write): 00:08:53.698 nvme0n1: ios=13041/0, merge=0/0, ticks=3028/0, in_queue=3028, util=95.31% 00:08:53.698 nvme0n2: ios=19702/0, merge=0/0, ticks=3321/0, in_queue=3321, util=95.29% 00:08:53.698 nvme0n3: ios=11462/0, merge=0/0, ticks=2869/0, in_queue=2869, util=95.99% 00:08:53.698 nvme0n4: ios=15095/0, merge=0/0, ticks=2690/0, in_queue=2690, util=96.73% 00:08:53.698 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:53.698 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:53.957 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:53.957 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:54.216 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:54.216 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:54.475 12:16:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:54.475 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:54.742 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:54.742 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66122 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:55.052 nvmf hotplug test: fio failed as expected 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:55.052 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.371 rmmod nvme_tcp 00:08:55.371 rmmod nvme_fabrics 00:08:55.371 rmmod nvme_keyring 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 65734 ']' 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 65734 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 65734 ']' 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 65734 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65734 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.371 killing process with pid 65734 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65734' 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 65734 00:08:55.371 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 65734 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 
00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.631 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:08:55.890 00:08:55.890 real 0m19.476s 00:08:55.890 user 1m12.033s 00:08:55.890 sys 0m10.419s 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:55.890 ************************************ 00:08:55.890 END TEST nvmf_fio_target 00:08:55.890 ************************************ 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.890 ************************************ 00:08:55.890 START TEST nvmf_bdevio 00:08:55.890 ************************************ 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:55.890 * Looking for test storage... 
00:08:55.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:55.890 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:55.891 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:56.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.151 --rc genhtml_branch_coverage=1 00:08:56.151 --rc genhtml_function_coverage=1 00:08:56.151 --rc genhtml_legend=1 00:08:56.151 --rc geninfo_all_blocks=1 00:08:56.151 --rc geninfo_unexecuted_blocks=1 00:08:56.151 00:08:56.151 ' 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:56.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.151 --rc genhtml_branch_coverage=1 00:08:56.151 --rc genhtml_function_coverage=1 00:08:56.151 --rc genhtml_legend=1 00:08:56.151 --rc geninfo_all_blocks=1 00:08:56.151 --rc geninfo_unexecuted_blocks=1 00:08:56.151 00:08:56.151 ' 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:56.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.151 --rc genhtml_branch_coverage=1 00:08:56.151 --rc genhtml_function_coverage=1 00:08:56.151 --rc genhtml_legend=1 00:08:56.151 --rc geninfo_all_blocks=1 00:08:56.151 --rc geninfo_unexecuted_blocks=1 00:08:56.151 00:08:56.151 ' 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:56.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.151 --rc genhtml_branch_coverage=1 00:08:56.151 --rc genhtml_function_coverage=1 00:08:56.151 --rc genhtml_legend=1 00:08:56.151 --rc geninfo_all_blocks=1 00:08:56.151 --rc geninfo_unexecuted_blocks=1 00:08:56.151 00:08:56.151 ' 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.151 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.152 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:56.152 Cannot find device "nvmf_init_br" 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:56.152 Cannot find device "nvmf_init_br2" 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:56.152 Cannot find device "nvmf_tgt_br" 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:56.152 Cannot find device "nvmf_tgt_br2" 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:56.152 Cannot find device "nvmf_init_br" 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:56.152 Cannot find device "nvmf_init_br2" 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:56.152 Cannot find device "nvmf_tgt_br" 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:56.152 Cannot find device "nvmf_tgt_br2" 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:56.152 Cannot find device "nvmf_br" 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:56.152 Cannot find device "nvmf_init_if" 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:56.152 Cannot find device "nvmf_init_if2" 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:56.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:56.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:56.152 
12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:56.152 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:56.153 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:56.153 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:56.153 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:56.153 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:56.153 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:56.153 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:56.153 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:56.153 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:56.153 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:56.413 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:56.413 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:08:56.413 00:08:56.413 --- 10.0.0.3 ping statistics --- 00:08:56.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.413 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:56.413 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:56.413 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:08:56.413 00:08:56.413 --- 10.0.0.4 ping statistics --- 00:08:56.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.413 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:56.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:56.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:56.413 00:08:56.413 --- 10.0.0.1 ping statistics --- 00:08:56.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.413 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:56.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:56.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:56.413 00:08:56.413 --- 10.0.0.2 ping statistics --- 00:08:56.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.413 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66487 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66487 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66487 ']' 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.413 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:56.413 [2024-12-06 12:16:43.020246] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:08:56.413 [2024-12-06 12:16:43.020316] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.673 [2024-12-06 12:16:43.163317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.673 [2024-12-06 12:16:43.197592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.673 [2024-12-06 12:16:43.197649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.673 [2024-12-06 12:16:43.197675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.673 [2024-12-06 12:16:43.197683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.673 [2024-12-06 12:16:43.197688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.673 [2024-12-06 12:16:43.198769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:56.673 [2024-12-06 12:16:43.198939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:56.673 [2024-12-06 12:16:43.199058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:56.673 [2024-12-06 12:16:43.199060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.673 [2024-12-06 12:16:43.230329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.673 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.673 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:08:56.673 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:56.673 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.673 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:56.673 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.673 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.673 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.673 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:56.933 [2024-12-06 12:16:43.333427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:56.933 Malloc0 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:56.933 [2024-12-06 12:16:43.384420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:56.933 { 00:08:56.933 "params": { 00:08:56.933 "name": "Nvme$subsystem", 00:08:56.933 "trtype": "$TEST_TRANSPORT", 00:08:56.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:56.933 "adrfam": "ipv4", 00:08:56.933 "trsvcid": "$NVMF_PORT", 00:08:56.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:56.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:56.933 "hdgst": ${hdgst:-false}, 00:08:56.933 "ddgst": ${ddgst:-false} 00:08:56.933 }, 00:08:56.933 "method": "bdev_nvme_attach_controller" 00:08:56.933 } 00:08:56.933 EOF 00:08:56.933 )") 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:08:56.933 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:56.933 "params": { 00:08:56.933 "name": "Nvme1", 00:08:56.933 "trtype": "tcp", 00:08:56.933 "traddr": "10.0.0.3", 00:08:56.933 "adrfam": "ipv4", 00:08:56.933 "trsvcid": "4420", 00:08:56.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:56.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:56.933 "hdgst": false, 00:08:56.933 "ddgst": false 00:08:56.933 }, 00:08:56.933 "method": "bdev_nvme_attach_controller" 00:08:56.933 }' 00:08:56.933 [2024-12-06 12:16:43.444059] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:08:56.933 [2024-12-06 12:16:43.444164] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66521 ] 00:08:57.194 [2024-12-06 12:16:43.599094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:57.194 [2024-12-06 12:16:43.639934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.194 [2024-12-06 12:16:43.640066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.194 [2024-12-06 12:16:43.640074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.194 [2024-12-06 12:16:43.681649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.194 I/O targets: 00:08:57.194 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:57.194 00:08:57.194 00:08:57.194 CUnit - A unit testing framework for C - Version 2.1-3 00:08:57.194 http://cunit.sourceforge.net/ 00:08:57.194 00:08:57.194 00:08:57.194 Suite: bdevio tests on: Nvme1n1 00:08:57.194 Test: blockdev write read block ...passed 00:08:57.194 Test: blockdev write zeroes read block ...passed 00:08:57.194 Test: blockdev write zeroes read no split ...passed 00:08:57.194 Test: blockdev write zeroes read split ...passed 00:08:57.194 Test: blockdev write zeroes read split partial ...passed 00:08:57.194 Test: blockdev reset ...[2024-12-06 12:16:43.816218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:57.194 [2024-12-06 12:16:43.816511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57ab80 (9): Bad file descriptor 00:08:57.194 [2024-12-06 12:16:43.834761] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:08:57.194 passed 00:08:57.194 Test: blockdev write read 8 blocks ...passed 00:08:57.194 Test: blockdev write read size > 128k ...passed 00:08:57.194 Test: blockdev write read invalid size ...passed 00:08:57.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:57.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:57.194 Test: blockdev write read max offset ...passed 00:08:57.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:57.194 Test: blockdev writev readv 8 blocks ...passed 00:08:57.194 Test: blockdev writev readv 30 x 1block ...passed 00:08:57.194 Test: blockdev writev readv block ...passed 00:08:57.194 Test: blockdev writev readv size > 128k ...passed 00:08:57.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:57.194 Test: blockdev comparev and writev ...[2024-12-06 12:16:43.843020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:57.194 [2024-12-06 12:16:43.843069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:57.194 [2024-12-06 12:16:43.843094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:57.194 [2024-12-06 12:16:43.843108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:57.194 [2024-12-06 12:16:43.843800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:57.194 [2024-12-06 12:16:43.843856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:57.194 [2024-12-06 12:16:43.843879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:57.194 [2024-12-06 12:16:43.843892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:57.194 [2024-12-06 12:16:43.844196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:57.194 [2024-12-06 12:16:43.844225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:57.194 [2024-12-06 12:16:43.844247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:57.194 [2024-12-06 12:16:43.844259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:57.194 [2024-12-06 12:16:43.844811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:57.194 [2024-12-06 12:16:43.844844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:57.194 [2024-12-06 12:16:43.844866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:57.194 [2024-12-06 12:16:43.844879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:57.194 passed 00:08:57.194 Test: blockdev nvme passthru rw ...passed 00:08:57.194 Test: blockdev nvme passthru vendor specific ...[2024-12-06 12:16:43.845752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:57.194 [2024-12-06 12:16:43.845789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:57.194 [2024-12-06 12:16:43.845912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:57.194 [2024-12-06 12:16:43.845938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:57.194 [2024-12-06 12:16:43.846062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:57.194 [2024-12-06 12:16:43.846093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:57.194 [2024-12-06 12:16:43.846225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:57.194 [2024-12-06 12:16:43.846258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:57.194 passed 00:08:57.453 Test: blockdev nvme admin passthru ...passed 00:08:57.453 Test: blockdev copy ...passed 00:08:57.453 00:08:57.453 Run Summary: Type Total Ran Passed Failed Inactive 00:08:57.453 suites 1 1 n/a 0 0 00:08:57.453 tests 23 23 23 0 0 00:08:57.453 asserts 152 152 152 0 n/a 00:08:57.453 00:08:57.453 Elapsed time = 0.142 seconds 00:08:57.453 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.453 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.453 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:57.453 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.453 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:57.453 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:57.453 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:57.453 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:57.453 rmmod nvme_tcp 00:08:57.453 rmmod nvme_fabrics 00:08:57.453 rmmod nvme_keyring 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
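Annotation (editor's note, not part of the captured output): the run summary above shows all 23 bdevio assertions passing, after which target/bdevio.sh deletes the test subsystem and nvmftestfini unloads the initiator-side kernel modules. A minimal sketch of the equivalent manual teardown, assuming an SPDK checkout with scripts/rpc.py available and a target still listening on the default RPC socket (/var/tmp/spdk.sock); the NQN is the one used throughout this run:

    # Remove the test subsystem from the running nvmf target
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Unload the initiator-side modules, mirroring the rmmod lines above (needs root)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics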
00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66487 ']' 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66487 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66487 ']' 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66487 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.453 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66487 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:08:57.713 killing process with pid 66487 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66487' 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66487 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66487 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:57.713 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:08:57.972 00:08:57.972 real 0m2.155s 00:08:57.972 user 0m5.457s 00:08:57.972 sys 0m0.731s 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:57.972 ************************************ 00:08:57.972 END TEST nvmf_bdevio 00:08:57.972 ************************************ 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:57.972 00:08:57.972 real 2m26.769s 00:08:57.972 user 6m24.480s 00:08:57.972 sys 0m52.894s 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.972 ************************************ 00:08:57.972 END TEST nvmf_target_core 00:08:57.972 ************************************ 00:08:57.972 12:16:44 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:57.972 12:16:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:57.972 12:16:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.972 12:16:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.972 ************************************ 00:08:57.972 START TEST nvmf_target_extra 00:08:57.972 ************************************ 00:08:57.972 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:58.232 * Looking for test storage... 
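Annotation (editor's note, not part of the captured output): before the nvmf_target_extra suite above begins, nvmftestfini unwound the virtual test network that nvmf_veth_init had created: it stripped the SPDK-tagged iptables rules, deleted the bridge and veth devices, and removed the target namespace. A condensed root-shell sketch of that cleanup, using the interface and namespace names from this run; the final ip netns delete is an assumption about what the remove_spdk_ns helper ultimately does, not a line copied from the log:

    # Drop only the firewall rules the suite added (tagged with an SPDK_NVMF comment)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Tear down the bridge and the host-side veth endpoints (deleting one end removes its peer)
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    # Delete the target-side endpoints inside the namespace, then the namespace itself
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk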
00:08:58.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.232 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:58.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.232 --rc genhtml_branch_coverage=1 00:08:58.233 --rc genhtml_function_coverage=1 00:08:58.233 --rc genhtml_legend=1 00:08:58.233 --rc geninfo_all_blocks=1 00:08:58.233 --rc geninfo_unexecuted_blocks=1 00:08:58.233 00:08:58.233 ' 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:58.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.233 --rc genhtml_branch_coverage=1 00:08:58.233 --rc genhtml_function_coverage=1 00:08:58.233 --rc genhtml_legend=1 00:08:58.233 --rc geninfo_all_blocks=1 00:08:58.233 --rc geninfo_unexecuted_blocks=1 00:08:58.233 00:08:58.233 ' 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:58.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.233 --rc genhtml_branch_coverage=1 00:08:58.233 --rc genhtml_function_coverage=1 00:08:58.233 --rc genhtml_legend=1 00:08:58.233 --rc geninfo_all_blocks=1 00:08:58.233 --rc geninfo_unexecuted_blocks=1 00:08:58.233 00:08:58.233 ' 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:58.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.233 --rc genhtml_branch_coverage=1 00:08:58.233 --rc genhtml_function_coverage=1 00:08:58.233 --rc genhtml_legend=1 00:08:58.233 --rc geninfo_all_blocks=1 00:08:58.233 --rc geninfo_unexecuted_blocks=1 00:08:58.233 00:08:58.233 ' 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.233 12:16:44 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.233 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:58.233 ************************************ 00:08:58.233 START TEST nvmf_auth_target 00:08:58.233 ************************************ 00:08:58.233 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:08:58.493 * Looking for test storage... 
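Annotation (editor's note, not part of the captured output): the "lt 1.15 2" trace above (repeated below when auth.sh sources the same helpers) is scripts/common.sh deciding whether the installed lcov is older than 2.x so it can pick matching LCOV_OPTS: it splits both version strings on '.', '-' and ':' and compares the components numerically, left to right. A simplified re-implementation of that comparison, written for illustration only and not taken from the SPDK scripts:

    # Return success (0) if version $1 sorts strictly before version $2
    version_lt() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i x y
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            x=${a[i]:-0}; y=${b[i]:-0}
            # Non-numeric components (e.g. git suffixes) are treated as 0
            [[ $x =~ ^[0-9]+$ ]] || x=0
            [[ $y =~ ^[0-9]+$ ]] || y=0
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    # Same decision the trace makes: keep the legacy coverage flags for lcov 1.x
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x: use --rc lcov_* options"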
00:08:58.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:58.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.494 --rc genhtml_branch_coverage=1 00:08:58.494 --rc genhtml_function_coverage=1 00:08:58.494 --rc genhtml_legend=1 00:08:58.494 --rc geninfo_all_blocks=1 00:08:58.494 --rc geninfo_unexecuted_blocks=1 00:08:58.494 00:08:58.494 ' 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:58.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.494 --rc genhtml_branch_coverage=1 00:08:58.494 --rc genhtml_function_coverage=1 00:08:58.494 --rc genhtml_legend=1 00:08:58.494 --rc geninfo_all_blocks=1 00:08:58.494 --rc geninfo_unexecuted_blocks=1 00:08:58.494 00:08:58.494 ' 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:58.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.494 --rc genhtml_branch_coverage=1 00:08:58.494 --rc genhtml_function_coverage=1 00:08:58.494 --rc genhtml_legend=1 00:08:58.494 --rc geninfo_all_blocks=1 00:08:58.494 --rc geninfo_unexecuted_blocks=1 00:08:58.494 00:08:58.494 ' 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:58.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.494 --rc genhtml_branch_coverage=1 00:08:58.494 --rc genhtml_function_coverage=1 00:08:58.494 --rc genhtml_legend=1 00:08:58.494 --rc geninfo_all_blocks=1 00:08:58.494 --rc geninfo_unexecuted_blocks=1 00:08:58.494 00:08:58.494 ' 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.494 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.495 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.495 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.495 
12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:58.495 Cannot find device "nvmf_init_br" 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:58.495 Cannot find device "nvmf_init_br2" 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:58.495 Cannot find device "nvmf_tgt_br" 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.495 Cannot find device "nvmf_tgt_br2" 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:58.495 Cannot find device "nvmf_init_br" 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:58.495 Cannot find device "nvmf_init_br2" 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:58.495 Cannot find device "nvmf_tgt_br" 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:08:58.495 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:58.495 Cannot find device "nvmf_tgt_br2" 00:08:58.496 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:08:58.496 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:58.496 Cannot find device "nvmf_br" 00:08:58.496 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:08:58.496 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:58.496 Cannot find device "nvmf_init_if" 00:08:58.496 12:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:08:58.496 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:58.496 Cannot find device "nvmf_init_if2" 00:08:58.496 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:08:58.496 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.496 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:08:58.496 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.496 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:08:58.496 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:58.754 12:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:58.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:58.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:08:58.754 00:08:58.754 --- 10.0.0.3 ping statistics --- 00:08:58.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.754 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:58.754 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:58.754 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:08:58.754 00:08:58.754 --- 10.0.0.4 ping statistics --- 00:08:58.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.754 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:58.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:58.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:08:58.754 00:08:58.754 --- 10.0.0.1 ping statistics --- 00:08:58.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.754 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:58.754 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:59.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:08:59.011 00:08:59.011 --- 10.0.0.2 ping statistics --- 00:08:59.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.011 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.011 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=66807 00:08:59.012 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:08:59.012 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 66807 00:08:59.012 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66807 ']' 00:08:59.012 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.012 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.012 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
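Annotation (editor's note, not part of the captured output): the nvmf_veth_init trace above builds the whole test network in software: a target network namespace, two initiator/target veth pairs joined by a Linux bridge, 10.0.0.0/24 addressing, iptables ACCEPT rules for port 4420, and a ping sweep to confirm every address answers before nvmf_tgt is started inside the namespace. A condensed root-shell sketch of one initiator/target pair (the run above creates two), reusing the names and addresses that appear in the log:

    # Namespace for the nvmf target, so its sockets are isolated from the host
    ip netns add nvmf_tgt_ns_spdk
    # One veth pair per side; the *_br ends will be enslaved to a bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # Addressing: initiator 10.0.0.1, target 10.0.0.3 (as in the ping output above)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # Bring everything up and join the bridge
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Allow NVMe/TCP (port 4420) in, then verify reachability like the trace does
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3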
00:08:59.012 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.012 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=66827 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8c2b38961582f9178595b9c6697b7cd3764ed9dccbc8802c 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.WOA 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8c2b38961582f9178595b9c6697b7cd3764ed9dccbc8802c 0 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8c2b38961582f9178595b9c6697b7cd3764ed9dccbc8802c 0 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8c2b38961582f9178595b9c6697b7cd3764ed9dccbc8802c 00:08:59.269 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:59.270 12:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.WOA 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.WOA 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.WOA 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d90b8d5280b0b904eac66d118a39df53cff9c5fc7466b068a032815b97091fba 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.qId 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d90b8d5280b0b904eac66d118a39df53cff9c5fc7466b068a032815b97091fba 3 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d90b8d5280b0b904eac66d118a39df53cff9c5fc7466b068a032815b97091fba 3 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d90b8d5280b0b904eac66d118a39df53cff9c5fc7466b068a032815b97091fba 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:08:59.270 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.qId 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.qId 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.qId 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:08:59.528 12:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ea73a7b6a0c7cd8a382b55ad9a0acdc4 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.NpK 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ea73a7b6a0c7cd8a382b55ad9a0acdc4 1 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ea73a7b6a0c7cd8a382b55ad9a0acdc4 1 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ea73a7b6a0c7cd8a382b55ad9a0acdc4 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:08:59.528 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.NpK 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.NpK 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.NpK 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=de2a9656db5c190b49800b90348c3af097c5466cf4617a67 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.no7 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key de2a9656db5c190b49800b90348c3af097c5466cf4617a67 2 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 de2a9656db5c190b49800b90348c3af097c5466cf4617a67 2 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=de2a9656db5c190b49800b90348c3af097c5466cf4617a67 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.no7 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.no7 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.no7 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6fd689c4e5d685bfa84840273c1db35a5bbdf0c0c20f77a3 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kR0 00:08:59.528 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6fd689c4e5d685bfa84840273c1db35a5bbdf0c0c20f77a3 2 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6fd689c4e5d685bfa84840273c1db35a5bbdf0c0c20f77a3 2 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6fd689c4e5d685bfa84840273c1db35a5bbdf0c0c20f77a3 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kR0 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kR0 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.kR0 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:59.529 12:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6ebffb0fa3650b82b03bcfcfb7a4f8d8 00:08:59.529 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.F2Q 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6ebffb0fa3650b82b03bcfcfb7a4f8d8 1 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6ebffb0fa3650b82b03bcfcfb7a4f8d8 1 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6ebffb0fa3650b82b03bcfcfb7a4f8d8 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.F2Q 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.F2Q 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.F2Q 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1d687b3ebcac04cb144449b7374c4fa11e90c04c20d927cb55f54bfed011d2e6 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vuv 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
1d687b3ebcac04cb144449b7374c4fa11e90c04c20d927cb55f54bfed011d2e6 3 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1d687b3ebcac04cb144449b7374c4fa11e90c04c20d927cb55f54bfed011d2e6 3 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1d687b3ebcac04cb144449b7374c4fa11e90c04c20d927cb55f54bfed011d2e6 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vuv 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vuv 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.vuv 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 66807 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66807 ']' 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.787 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.044 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.044 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:00.044 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 66827 /var/tmp/host.sock 00:09:00.044 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66827 ']' 00:09:00.044 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:09:00.044 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:00.044 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
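[editor's note] The lines that follow register every generated key file twice: once with the target's RPC server (default socket /var/tmp/spdk.sock, via rpc_cmd) and once with the host's RPC server at /var/tmp/host.sock (via hostrpc). A condensed sketch of that registration, using the rpc.py path, sockets, key names, and key files shown in this log; collapsing the rpc_cmd/hostrpc wrappers into direct rpc.py calls is the only liberty taken.

# condensed sketch of the keyring registration traced below
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
keys=(/tmp/spdk.key-null.WOA /tmp/spdk.key-sha256.NpK /tmp/spdk.key-sha384.kR0 /tmp/spdk.key-sha512.vuv)
ckeys=(/tmp/spdk.key-sha512.qId /tmp/spdk.key-sha384.no7 /tmp/spdk.key-sha256.F2Q "")
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[i]}"                       # target side
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}" # host side
    if [[ -n ${ckeys[i]} ]]; then                                          # ckey3 is empty in this run
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done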
00:09:00.044 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.044 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.WOA 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.WOA 00:09:00.302 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.WOA 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.qId ]] 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qId 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qId 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qId 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NpK 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.NpK 00:09:00.868 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.NpK 00:09:01.127 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.no7 ]] 00:09:01.127 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.no7 00:09:01.127 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.127 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.127 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.127 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.no7 00:09:01.127 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.no7 00:09:01.384 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:01.384 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kR0 00:09:01.384 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.384 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.384 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.384 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.kR0 00:09:01.384 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.kR0 00:09:01.642 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.F2Q ]] 00:09:01.642 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.F2Q 00:09:01.642 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.642 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.642 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.642 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.F2Q 00:09:01.642 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.F2Q 00:09:01.917 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:01.917 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.vuv 00:09:01.917 12:16:48 
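[editor's note] Once the keys are registered, each loop iteration traced below first pins the host's allowed DH-HMAC-CHAP digest and DH group, then authorizes the host NQN on the subsystem with a key pair, and finally attaches a controller through the host RPC server. One iteration (digest sha256, dhgroup null, key index 0) written out as plain rpc.py calls; every flag and value is copied from this log, only the rpc_cmd/hostrpc wrappers are expanded.

# one iteration of the connect_authenticate loop, as traced below
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0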
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.917 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.917 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.917 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.vuv 00:09:01.917 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.vuv 00:09:02.175 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:02.175 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:02.175 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:02.175 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:02.175 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:02.175 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:02.433 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:02.692 00:09:02.692 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:02.692 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:02.692 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:02.950 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:02.950 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:02.950 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.950 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.950 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.950 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:02.950 { 00:09:02.950 "cntlid": 1, 00:09:02.950 "qid": 0, 00:09:02.950 "state": "enabled", 00:09:02.950 "thread": "nvmf_tgt_poll_group_000", 00:09:02.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:02.950 "listen_address": { 00:09:02.950 "trtype": "TCP", 00:09:02.951 "adrfam": "IPv4", 00:09:02.951 "traddr": "10.0.0.3", 00:09:02.951 "trsvcid": "4420" 00:09:02.951 }, 00:09:02.951 "peer_address": { 00:09:02.951 "trtype": "TCP", 00:09:02.951 "adrfam": "IPv4", 00:09:02.951 "traddr": "10.0.0.1", 00:09:02.951 "trsvcid": "42098" 00:09:02.951 }, 00:09:02.951 "auth": { 00:09:02.951 "state": "completed", 00:09:02.951 "digest": "sha256", 00:09:02.951 "dhgroup": "null" 00:09:02.951 } 00:09:02.951 } 00:09:02.951 ]' 00:09:02.951 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:02.951 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:02.951 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:02.951 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:02.951 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:03.209 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:03.209 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:03.209 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:03.467 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:03.467 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:07.655 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:07.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:07.655 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:07.655 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.655 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:07.655 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.655 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:07.655 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:07.655 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:07.655 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:07.656 12:16:54 
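[editor's note] Besides the SPDK host attach, each iteration also exercises the kernel nvme-cli path, as just traced for key 0: connect using the literal DHHC-1 secrets (not the keyring names), then disconnect. The sketch below restates that call with the exact secrets and NQNs from this log; line breaks are added only for readability.

# nvme-cli side of the key-0 check, as traced above
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 \
    --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 \
    --dhchap-secret 'DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==:' \
    --dhchap-ctrl-secret 'DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0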
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:07.656 00:09:07.914 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:07.914 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:07.914 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:08.173 { 00:09:08.173 "cntlid": 3, 00:09:08.173 "qid": 0, 00:09:08.173 "state": "enabled", 00:09:08.173 "thread": "nvmf_tgt_poll_group_000", 00:09:08.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:08.173 "listen_address": { 00:09:08.173 "trtype": "TCP", 00:09:08.173 "adrfam": "IPv4", 00:09:08.173 "traddr": "10.0.0.3", 00:09:08.173 "trsvcid": "4420" 00:09:08.173 }, 00:09:08.173 "peer_address": { 00:09:08.173 "trtype": "TCP", 00:09:08.173 "adrfam": "IPv4", 00:09:08.173 "traddr": "10.0.0.1", 00:09:08.173 "trsvcid": "48820" 00:09:08.173 }, 00:09:08.173 "auth": { 00:09:08.173 "state": "completed", 00:09:08.173 "digest": "sha256", 00:09:08.173 "dhgroup": "null" 00:09:08.173 } 00:09:08.173 } 00:09:08.173 ]' 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:08.173 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:08.431 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret 
DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:08.431 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:08.997 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:09.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:09.255 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:09.255 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.255 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.255 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.255 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:09.255 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:09.255 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:09.514 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:09.773 00:09:09.773 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:09.773 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:09.773 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:10.031 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:10.031 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:10.031 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.031 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:10.031 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.031 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:10.031 { 00:09:10.031 "cntlid": 5, 00:09:10.031 "qid": 0, 00:09:10.031 "state": "enabled", 00:09:10.031 "thread": "nvmf_tgt_poll_group_000", 00:09:10.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:10.031 "listen_address": { 00:09:10.031 "trtype": "TCP", 00:09:10.031 "adrfam": "IPv4", 00:09:10.031 "traddr": "10.0.0.3", 00:09:10.031 "trsvcid": "4420" 00:09:10.031 }, 00:09:10.031 "peer_address": { 00:09:10.031 "trtype": "TCP", 00:09:10.031 "adrfam": "IPv4", 00:09:10.031 "traddr": "10.0.0.1", 00:09:10.031 "trsvcid": "48838" 00:09:10.031 }, 00:09:10.031 "auth": { 00:09:10.031 "state": "completed", 00:09:10.031 "digest": "sha256", 00:09:10.031 "dhgroup": "null" 00:09:10.031 } 00:09:10.031 } 00:09:10.031 ]' 00:09:10.031 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:10.031 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:10.031 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:10.031 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:10.031 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:10.291 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:10.291 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:10.291 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:10.550 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:10.550 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:11.117 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:11.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:11.117 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:11.117 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.117 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:11.117 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.117 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:11.117 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:11.117 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:11.375 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:11.634 00:09:11.634 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:11.634 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:11.634 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:11.892 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:11.892 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:11.892 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.892 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:11.892 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.892 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:11.892 { 00:09:11.892 "cntlid": 7, 00:09:11.892 "qid": 0, 00:09:11.892 "state": "enabled", 00:09:11.892 "thread": "nvmf_tgt_poll_group_000", 00:09:11.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:11.892 "listen_address": { 00:09:11.892 "trtype": "TCP", 00:09:11.892 "adrfam": "IPv4", 00:09:11.892 "traddr": "10.0.0.3", 00:09:11.892 "trsvcid": "4420" 00:09:11.892 }, 00:09:11.892 "peer_address": { 00:09:11.892 "trtype": "TCP", 00:09:11.892 "adrfam": "IPv4", 00:09:11.892 "traddr": "10.0.0.1", 00:09:11.892 "trsvcid": "48866" 00:09:11.892 }, 00:09:11.892 "auth": { 00:09:11.892 "state": "completed", 00:09:11.892 "digest": "sha256", 00:09:11.892 "dhgroup": "null" 00:09:11.892 } 00:09:11.892 } 00:09:11.892 ]' 00:09:11.892 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:12.150 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:12.150 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:12.150 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:12.150 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:12.150 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:12.150 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:12.150 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:12.408 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:12.408 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:12.974 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:12.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:12.974 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:12.974 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.974 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.974 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.974 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:12.974 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:12.974 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:12.974 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:13.232 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:13.798 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:13.798 { 00:09:13.798 "cntlid": 9, 00:09:13.798 "qid": 0, 00:09:13.798 "state": "enabled", 00:09:13.798 "thread": "nvmf_tgt_poll_group_000", 00:09:13.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:13.798 "listen_address": { 00:09:13.798 "trtype": "TCP", 00:09:13.798 "adrfam": "IPv4", 00:09:13.798 "traddr": "10.0.0.3", 00:09:13.798 "trsvcid": "4420" 00:09:13.798 }, 00:09:13.798 "peer_address": { 00:09:13.798 "trtype": "TCP", 00:09:13.798 "adrfam": "IPv4", 00:09:13.798 "traddr": "10.0.0.1", 00:09:13.798 "trsvcid": "48134" 00:09:13.798 }, 00:09:13.798 "auth": { 00:09:13.798 "state": "completed", 00:09:13.798 "digest": "sha256", 00:09:13.798 "dhgroup": "ffdhe2048" 00:09:13.798 } 00:09:13.798 } 00:09:13.798 ]' 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:13.798 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:14.056 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:14.056 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:14.056 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:14.056 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:14.056 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:14.314 
12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:14.314 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:14.879 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:14.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:14.879 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:14.879 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.879 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.879 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.879 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:14.879 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:14.880 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:15.138 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:15.397 00:09:15.397 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:15.397 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:15.397 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:15.655 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:15.655 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:15.655 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.655 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.655 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.655 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:15.655 { 00:09:15.655 "cntlid": 11, 00:09:15.655 "qid": 0, 00:09:15.655 "state": "enabled", 00:09:15.655 "thread": "nvmf_tgt_poll_group_000", 00:09:15.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:15.655 "listen_address": { 00:09:15.655 "trtype": "TCP", 00:09:15.655 "adrfam": "IPv4", 00:09:15.655 "traddr": "10.0.0.3", 00:09:15.655 "trsvcid": "4420" 00:09:15.655 }, 00:09:15.655 "peer_address": { 00:09:15.655 "trtype": "TCP", 00:09:15.655 "adrfam": "IPv4", 00:09:15.655 "traddr": "10.0.0.1", 00:09:15.655 "trsvcid": "48160" 00:09:15.655 }, 00:09:15.655 "auth": { 00:09:15.655 "state": "completed", 00:09:15.655 "digest": "sha256", 00:09:15.655 "dhgroup": "ffdhe2048" 00:09:15.655 } 00:09:15.655 } 00:09:15.655 ]' 00:09:15.655 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:15.655 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:15.655 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:15.915 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:15.915 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:15.915 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:15.915 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:15.915 
12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:16.175 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:16.175 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:16.744 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:16.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:16.744 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:16.744 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.744 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.744 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.744 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:16.744 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:16.744 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:17.003 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:17.261 00:09:17.261 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:17.261 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:17.261 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:17.520 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:17.520 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:17.520 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.520 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.520 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.520 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:17.520 { 00:09:17.520 "cntlid": 13, 00:09:17.520 "qid": 0, 00:09:17.520 "state": "enabled", 00:09:17.520 "thread": "nvmf_tgt_poll_group_000", 00:09:17.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:17.520 "listen_address": { 00:09:17.520 "trtype": "TCP", 00:09:17.520 "adrfam": "IPv4", 00:09:17.520 "traddr": "10.0.0.3", 00:09:17.520 "trsvcid": "4420" 00:09:17.520 }, 00:09:17.520 "peer_address": { 00:09:17.520 "trtype": "TCP", 00:09:17.520 "adrfam": "IPv4", 00:09:17.520 "traddr": "10.0.0.1", 00:09:17.520 "trsvcid": "48178" 00:09:17.520 }, 00:09:17.520 "auth": { 00:09:17.520 "state": "completed", 00:09:17.520 "digest": "sha256", 00:09:17.520 "dhgroup": "ffdhe2048" 00:09:17.520 } 00:09:17.520 } 00:09:17.520 ]' 00:09:17.520 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:17.520 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:17.520 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:17.520 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:17.520 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:17.779 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:17.779 12:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:17.779 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:18.038 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:18.038 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:18.607 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:18.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:18.607 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:18.607 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.607 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.607 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.607 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:18.607 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:18.607 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:18.864 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:18.864 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:18.865 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:18.865 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:18.865 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:18.865 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:18.865 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:09:18.865 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.865 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
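The trace above repeats one authentication pass per key/dhgroup combination. The following is a minimal sketch of that cycle, assembled only from commands visible in the trace; the key names key1/ckey1 refer to keyring entries registered earlier in the run (outside this excerpt), the DHHC-1 secrets are placeholders rather than the values printed above, and the target-side rpc.py calls are shown against the default RPC socket while the host-side calls use /var/tmp/host.sock as in the trace.

    #!/usr/bin/env bash
    # Sketch of the DH-HMAC-CHAP cycle exercised above (sha256 digest, one ffdhe group, one key pair).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Restrict the host-side initiator to one digest/dhgroup (host RPC socket, as in the trace).
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Allow the host on the subsystem with a DH-CHAP key and optional controller key (target side).
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Attach a controller through the SPDK host stack and verify that the qpair authenticated.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'        # expect "completed"
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Same check through the kernel initiator, passing the secrets directly (placeholders here).
    nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 \
        --dhchap-secret "DHHC-1:00:<host secret>" --dhchap-ctrl-secret "DHHC-1:03:<ctrl secret>"
    nvme disconnect -n "$SUBNQN"

    # Drop the host entry before moving on to the next key/dhgroup combination.
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

In the trace the same cycle then repeats over keys 0-3 and, after a fresh bdev_nvme_set_options call, over the ffdhe3072 and ffdhe4096 groups.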
00:09:18.865 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.865 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:18.865 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:18.865 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:19.122 00:09:19.122 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:19.122 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:19.122 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:19.380 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:19.380 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:19.380 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.380 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.380 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.380 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:19.380 { 00:09:19.380 "cntlid": 15, 00:09:19.380 "qid": 0, 00:09:19.380 "state": "enabled", 00:09:19.380 "thread": "nvmf_tgt_poll_group_000", 00:09:19.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:19.380 "listen_address": { 00:09:19.380 "trtype": "TCP", 00:09:19.380 "adrfam": "IPv4", 00:09:19.380 "traddr": "10.0.0.3", 00:09:19.380 "trsvcid": "4420" 00:09:19.380 }, 00:09:19.380 "peer_address": { 00:09:19.380 "trtype": "TCP", 00:09:19.380 "adrfam": "IPv4", 00:09:19.380 "traddr": "10.0.0.1", 00:09:19.380 "trsvcid": "48206" 00:09:19.380 }, 00:09:19.380 "auth": { 00:09:19.380 "state": "completed", 00:09:19.380 "digest": "sha256", 00:09:19.380 "dhgroup": "ffdhe2048" 00:09:19.380 } 00:09:19.380 } 00:09:19.380 ]' 00:09:19.380 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:19.380 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:19.380 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:19.380 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:19.380 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:19.638 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:19.638 
12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:19.638 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:19.897 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:19.897 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:20.464 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:20.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:20.464 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:20.464 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.464 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.464 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.464 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:20.464 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:20.464 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:20.464 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:20.723 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:21.291 00:09:21.291 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:21.291 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:21.291 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:21.551 { 00:09:21.551 "cntlid": 17, 00:09:21.551 "qid": 0, 00:09:21.551 "state": "enabled", 00:09:21.551 "thread": "nvmf_tgt_poll_group_000", 00:09:21.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:21.551 "listen_address": { 00:09:21.551 "trtype": "TCP", 00:09:21.551 "adrfam": "IPv4", 00:09:21.551 "traddr": "10.0.0.3", 00:09:21.551 "trsvcid": "4420" 00:09:21.551 }, 00:09:21.551 "peer_address": { 00:09:21.551 "trtype": "TCP", 00:09:21.551 "adrfam": "IPv4", 00:09:21.551 "traddr": "10.0.0.1", 00:09:21.551 "trsvcid": "48228" 00:09:21.551 }, 00:09:21.551 "auth": { 00:09:21.551 "state": "completed", 00:09:21.551 "digest": "sha256", 00:09:21.551 "dhgroup": "ffdhe3072" 00:09:21.551 } 00:09:21.551 } 00:09:21.551 ]' 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:21.551 12:17:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:21.551 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:21.826 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:21.826 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:22.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:22.776 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:23.342 00:09:23.342 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:23.342 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:23.342 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:23.602 { 00:09:23.602 "cntlid": 19, 00:09:23.602 "qid": 0, 00:09:23.602 "state": "enabled", 00:09:23.602 "thread": "nvmf_tgt_poll_group_000", 00:09:23.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:23.602 "listen_address": { 00:09:23.602 "trtype": "TCP", 00:09:23.602 "adrfam": "IPv4", 00:09:23.602 "traddr": "10.0.0.3", 00:09:23.602 "trsvcid": "4420" 00:09:23.602 }, 00:09:23.602 "peer_address": { 00:09:23.602 "trtype": "TCP", 00:09:23.602 "adrfam": "IPv4", 00:09:23.602 "traddr": "10.0.0.1", 00:09:23.602 "trsvcid": "53784" 00:09:23.602 }, 00:09:23.602 "auth": { 00:09:23.602 "state": "completed", 00:09:23.602 "digest": "sha256", 00:09:23.602 "dhgroup": "ffdhe3072" 00:09:23.602 } 00:09:23.602 } 00:09:23.602 ]' 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:23.602 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:24.171 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:24.171 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:24.737 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:24.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:24.737 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:24.737 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.737 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.737 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.737 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:24.737 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:24.737 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:24.996 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:25.254 00:09:25.254 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:25.254 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:25.254 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:25.512 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:25.512 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:25.512 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.512 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.512 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.512 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:25.512 { 00:09:25.512 "cntlid": 21, 00:09:25.512 "qid": 0, 00:09:25.512 "state": "enabled", 00:09:25.512 "thread": "nvmf_tgt_poll_group_000", 00:09:25.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:25.512 "listen_address": { 00:09:25.512 "trtype": "TCP", 00:09:25.512 "adrfam": "IPv4", 00:09:25.512 "traddr": "10.0.0.3", 00:09:25.512 "trsvcid": "4420" 00:09:25.512 }, 00:09:25.512 "peer_address": { 00:09:25.512 "trtype": "TCP", 00:09:25.512 "adrfam": "IPv4", 00:09:25.512 "traddr": "10.0.0.1", 00:09:25.512 "trsvcid": "53818" 00:09:25.512 }, 00:09:25.512 "auth": { 00:09:25.512 "state": "completed", 00:09:25.512 "digest": "sha256", 00:09:25.512 "dhgroup": "ffdhe3072" 00:09:25.512 } 00:09:25.512 } 00:09:25.512 ]' 00:09:25.512 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:25.512 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:25.512 12:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:25.512 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:25.512 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:25.771 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:25.771 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:25.771 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:26.028 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:26.028 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:26.594 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:26.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:26.594 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:26.594 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.594 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.594 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.594 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:26.594 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:26.594 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:26.853 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:27.112 00:09:27.112 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:27.112 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:27.112 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:27.371 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:27.371 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:27.371 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.371 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:27.371 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.371 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:27.371 { 00:09:27.371 "cntlid": 23, 00:09:27.371 "qid": 0, 00:09:27.371 "state": "enabled", 00:09:27.371 "thread": "nvmf_tgt_poll_group_000", 00:09:27.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:27.371 "listen_address": { 00:09:27.371 "trtype": "TCP", 00:09:27.371 "adrfam": "IPv4", 00:09:27.371 "traddr": "10.0.0.3", 00:09:27.371 "trsvcid": "4420" 00:09:27.371 }, 00:09:27.371 "peer_address": { 00:09:27.371 "trtype": "TCP", 00:09:27.371 "adrfam": "IPv4", 00:09:27.371 "traddr": "10.0.0.1", 00:09:27.371 "trsvcid": "53852" 00:09:27.371 }, 00:09:27.371 "auth": { 00:09:27.371 "state": "completed", 00:09:27.371 "digest": "sha256", 00:09:27.371 "dhgroup": "ffdhe3072" 00:09:27.371 } 00:09:27.371 } 00:09:27.371 ]' 00:09:27.371 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:27.371 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:09:27.371 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:27.630 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:27.630 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:27.630 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:27.630 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:27.630 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:27.889 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:27.889 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:28.456 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:28.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:28.456 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:28.456 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.456 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.456 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.456 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:28.456 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:28.456 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:28.456 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:28.714 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:28.971 00:09:28.971 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:28.971 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:28.971 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:29.228 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:29.228 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:29.228 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.228 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.228 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.228 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:29.228 { 00:09:29.228 "cntlid": 25, 00:09:29.228 "qid": 0, 00:09:29.228 "state": "enabled", 00:09:29.228 "thread": "nvmf_tgt_poll_group_000", 00:09:29.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:29.228 "listen_address": { 00:09:29.228 "trtype": "TCP", 00:09:29.228 "adrfam": "IPv4", 00:09:29.228 "traddr": "10.0.0.3", 00:09:29.228 "trsvcid": "4420" 00:09:29.228 }, 00:09:29.228 "peer_address": { 00:09:29.228 "trtype": "TCP", 00:09:29.228 "adrfam": "IPv4", 00:09:29.228 "traddr": "10.0.0.1", 00:09:29.228 "trsvcid": "53872" 00:09:29.228 }, 00:09:29.228 "auth": { 00:09:29.228 "state": "completed", 00:09:29.228 "digest": "sha256", 00:09:29.228 "dhgroup": "ffdhe4096" 00:09:29.228 } 00:09:29.228 } 00:09:29.228 ]' 00:09:29.228 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:09:29.228 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:29.228 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:29.486 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:29.486 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:29.486 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:29.486 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:29.486 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:29.744 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:29.744 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:30.313 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:30.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:30.313 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:30.313 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.313 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.313 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.313 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:30.313 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:30.313 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:30.573 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:30.832 00:09:30.832 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:30.832 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:30.832 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:31.092 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:31.092 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:31.092 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.092 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.092 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.092 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:31.092 { 00:09:31.092 "cntlid": 27, 00:09:31.092 "qid": 0, 00:09:31.092 "state": "enabled", 00:09:31.092 "thread": "nvmf_tgt_poll_group_000", 00:09:31.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:31.092 "listen_address": { 00:09:31.092 "trtype": "TCP", 00:09:31.092 "adrfam": "IPv4", 00:09:31.092 "traddr": "10.0.0.3", 00:09:31.092 "trsvcid": "4420" 00:09:31.092 }, 00:09:31.092 "peer_address": { 00:09:31.092 "trtype": "TCP", 00:09:31.092 "adrfam": "IPv4", 00:09:31.092 "traddr": "10.0.0.1", 00:09:31.092 "trsvcid": "53890" 00:09:31.092 }, 00:09:31.092 "auth": { 00:09:31.092 "state": "completed", 
00:09:31.092 "digest": "sha256", 00:09:31.092 "dhgroup": "ffdhe4096" 00:09:31.092 } 00:09:31.092 } 00:09:31.092 ]' 00:09:31.092 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:31.092 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:31.092 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:31.352 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:31.352 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:31.352 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:31.352 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:31.352 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:31.611 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:31.611 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:32.179 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:32.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:32.179 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:32.179 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.179 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.179 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.179 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:32.179 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:32.179 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:32.438 12:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:32.438 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:33.006 00:09:33.006 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:33.006 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:33.006 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:33.264 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:33.265 { 00:09:33.265 "cntlid": 29, 00:09:33.265 "qid": 0, 00:09:33.265 "state": "enabled", 00:09:33.265 "thread": "nvmf_tgt_poll_group_000", 00:09:33.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:33.265 "listen_address": { 00:09:33.265 "trtype": "TCP", 00:09:33.265 "adrfam": "IPv4", 00:09:33.265 "traddr": "10.0.0.3", 00:09:33.265 "trsvcid": "4420" 00:09:33.265 }, 00:09:33.265 "peer_address": { 00:09:33.265 "trtype": "TCP", 00:09:33.265 "adrfam": 
"IPv4", 00:09:33.265 "traddr": "10.0.0.1", 00:09:33.265 "trsvcid": "53906" 00:09:33.265 }, 00:09:33.265 "auth": { 00:09:33.265 "state": "completed", 00:09:33.265 "digest": "sha256", 00:09:33.265 "dhgroup": "ffdhe4096" 00:09:33.265 } 00:09:33.265 } 00:09:33.265 ]' 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:33.265 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:33.523 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:33.523 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:34.089 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:34.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:34.089 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:34.089 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.089 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.089 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.089 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:34.089 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:34.089 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:09:34.346 12:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:34.346 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:34.604 00:09:34.604 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:34.604 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:34.604 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:35.170 { 00:09:35.170 "cntlid": 31, 00:09:35.170 "qid": 0, 00:09:35.170 "state": "enabled", 00:09:35.170 "thread": "nvmf_tgt_poll_group_000", 00:09:35.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:35.170 "listen_address": { 00:09:35.170 "trtype": "TCP", 00:09:35.170 "adrfam": "IPv4", 00:09:35.170 "traddr": "10.0.0.3", 00:09:35.170 "trsvcid": "4420" 00:09:35.170 }, 00:09:35.170 "peer_address": { 00:09:35.170 "trtype": "TCP", 
00:09:35.170 "adrfam": "IPv4", 00:09:35.170 "traddr": "10.0.0.1", 00:09:35.170 "trsvcid": "46778" 00:09:35.170 }, 00:09:35.170 "auth": { 00:09:35.170 "state": "completed", 00:09:35.170 "digest": "sha256", 00:09:35.170 "dhgroup": "ffdhe4096" 00:09:35.170 } 00:09:35.170 } 00:09:35.170 ]' 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:35.170 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:35.427 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:35.427 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:35.993 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:35.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:35.993 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:35.993 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.993 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.993 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.993 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:35.993 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:35.993 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:35.993 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:36.252 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:09:36.252 
12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:36.252 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:36.252 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:36.252 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:36.252 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:36.252 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:36.252 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.252 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.510 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.511 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:36.511 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:36.511 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:36.769 00:09:36.769 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:36.769 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:36.769 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:37.028 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:37.028 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:37.028 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.028 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.028 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.028 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:37.028 { 00:09:37.028 "cntlid": 33, 00:09:37.028 "qid": 0, 00:09:37.028 "state": "enabled", 00:09:37.028 "thread": "nvmf_tgt_poll_group_000", 00:09:37.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:37.028 "listen_address": { 00:09:37.028 "trtype": "TCP", 00:09:37.028 "adrfam": "IPv4", 00:09:37.028 "traddr": 
"10.0.0.3", 00:09:37.028 "trsvcid": "4420" 00:09:37.028 }, 00:09:37.028 "peer_address": { 00:09:37.028 "trtype": "TCP", 00:09:37.028 "adrfam": "IPv4", 00:09:37.028 "traddr": "10.0.0.1", 00:09:37.028 "trsvcid": "46798" 00:09:37.028 }, 00:09:37.028 "auth": { 00:09:37.028 "state": "completed", 00:09:37.028 "digest": "sha256", 00:09:37.028 "dhgroup": "ffdhe6144" 00:09:37.028 } 00:09:37.028 } 00:09:37.028 ]' 00:09:37.028 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:37.028 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:37.028 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:37.028 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:37.028 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:37.287 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:37.287 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:37.287 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:37.546 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:37.546 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:38.113 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:38.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:38.113 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:38.113 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.113 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.113 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.113 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:38.113 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:38.113 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:38.372 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:09:38.372 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:38.372 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:38.372 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:38.372 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:38.372 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:38.373 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.373 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.373 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.373 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.373 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.373 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.373 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.642 00:09:38.642 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:38.642 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:38.642 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:38.902 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:39.161 { 00:09:39.161 "cntlid": 35, 00:09:39.161 "qid": 0, 00:09:39.161 "state": "enabled", 00:09:39.161 "thread": "nvmf_tgt_poll_group_000", 
00:09:39.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:39.161 "listen_address": { 00:09:39.161 "trtype": "TCP", 00:09:39.161 "adrfam": "IPv4", 00:09:39.161 "traddr": "10.0.0.3", 00:09:39.161 "trsvcid": "4420" 00:09:39.161 }, 00:09:39.161 "peer_address": { 00:09:39.161 "trtype": "TCP", 00:09:39.161 "adrfam": "IPv4", 00:09:39.161 "traddr": "10.0.0.1", 00:09:39.161 "trsvcid": "46838" 00:09:39.161 }, 00:09:39.161 "auth": { 00:09:39.161 "state": "completed", 00:09:39.161 "digest": "sha256", 00:09:39.161 "dhgroup": "ffdhe6144" 00:09:39.161 } 00:09:39.161 } 00:09:39.161 ]' 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:39.161 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:39.420 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:39.420 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:39.988 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:39.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:39.988 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:39.988 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.988 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.988 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.988 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:39.988 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:39.988 12:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.247 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.816 00:09:40.816 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:40.816 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:40.816 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:41.075 { 
00:09:41.075 "cntlid": 37, 00:09:41.075 "qid": 0, 00:09:41.075 "state": "enabled", 00:09:41.075 "thread": "nvmf_tgt_poll_group_000", 00:09:41.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:41.075 "listen_address": { 00:09:41.075 "trtype": "TCP", 00:09:41.075 "adrfam": "IPv4", 00:09:41.075 "traddr": "10.0.0.3", 00:09:41.075 "trsvcid": "4420" 00:09:41.075 }, 00:09:41.075 "peer_address": { 00:09:41.075 "trtype": "TCP", 00:09:41.075 "adrfam": "IPv4", 00:09:41.075 "traddr": "10.0.0.1", 00:09:41.075 "trsvcid": "46852" 00:09:41.075 }, 00:09:41.075 "auth": { 00:09:41.075 "state": "completed", 00:09:41.075 "digest": "sha256", 00:09:41.075 "dhgroup": "ffdhe6144" 00:09:41.075 } 00:09:41.075 } 00:09:41.075 ]' 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:41.075 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:41.644 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:41.644 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:42.212 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:42.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:42.212 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:42.212 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.212 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.212 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.212 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:42.212 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:42.213 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:42.472 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:43.041 00:09:43.041 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:43.041 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:43.041 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:09:43.301 { 00:09:43.301 "cntlid": 39, 00:09:43.301 "qid": 0, 00:09:43.301 "state": "enabled", 00:09:43.301 "thread": "nvmf_tgt_poll_group_000", 00:09:43.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:43.301 "listen_address": { 00:09:43.301 "trtype": "TCP", 00:09:43.301 "adrfam": "IPv4", 00:09:43.301 "traddr": "10.0.0.3", 00:09:43.301 "trsvcid": "4420" 00:09:43.301 }, 00:09:43.301 "peer_address": { 00:09:43.301 "trtype": "TCP", 00:09:43.301 "adrfam": "IPv4", 00:09:43.301 "traddr": "10.0.0.1", 00:09:43.301 "trsvcid": "46888" 00:09:43.301 }, 00:09:43.301 "auth": { 00:09:43.301 "state": "completed", 00:09:43.301 "digest": "sha256", 00:09:43.301 "dhgroup": "ffdhe6144" 00:09:43.301 } 00:09:43.301 } 00:09:43.301 ]' 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:43.301 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:43.560 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:43.560 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:44.128 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:44.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:44.128 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:44.128 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.129 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.129 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.129 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:44.129 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:44.129 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:44.129 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.388 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.955 00:09:44.955 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:44.955 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:44.955 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:45.213 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:45.213 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:45.213 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.213 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.213 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:45.213 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:45.213 { 00:09:45.213 "cntlid": 41, 00:09:45.213 "qid": 0, 00:09:45.213 "state": "enabled", 00:09:45.213 "thread": "nvmf_tgt_poll_group_000", 00:09:45.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:45.213 "listen_address": { 00:09:45.213 "trtype": "TCP", 00:09:45.213 "adrfam": "IPv4", 00:09:45.213 "traddr": "10.0.0.3", 00:09:45.213 "trsvcid": "4420" 00:09:45.213 }, 00:09:45.213 "peer_address": { 00:09:45.213 "trtype": "TCP", 00:09:45.213 "adrfam": "IPv4", 00:09:45.213 "traddr": "10.0.0.1", 00:09:45.213 "trsvcid": "56232" 00:09:45.213 }, 00:09:45.213 "auth": { 00:09:45.213 "state": "completed", 00:09:45.213 "digest": "sha256", 00:09:45.213 "dhgroup": "ffdhe8192" 00:09:45.213 } 00:09:45.213 } 00:09:45.213 ]' 00:09:45.213 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:45.213 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:45.213 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:45.472 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:45.472 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:45.472 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:45.472 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:45.472 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:45.731 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:45.731 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:46.299 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:46.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:46.299 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:46.299 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.299 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.300 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
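The loop traced above repeats one and the same DH-HMAC-CHAP round trip for every digest/dhgroup/key combination. A minimal sketch of a single iteration, condensed from the commands in this log (the host RPC socket /var/tmp/host.sock, addresses, NQNs and key names are taken from the trace; the target-side RPC socket and the DHHC-1 secret values are assumed or elided), looks roughly like this:

#!/usr/bin/env bash
# Illustrative condensation of one connect_authenticate iteration (sha256 / ffdhe8192 / key1);
# values copied from the trace above, DHHC-1 secrets elided.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostid=539e2455-b2a8-46ce-bfce-40a317783b05
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
subnqn=nqn.2024-03.io.spdk:cnode0

# Restrict the host-side bdev_nvme layer to the digest/dhgroup pair under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Allow the host on the subsystem with the key under test (target-side RPC; the socket flag is
# omitted here, the log's rpc_cmd wrapper supplies it).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach through the SPDK host stack and check that the qpair finished authentication.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expected: completed
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator (secret values elided).
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
nvme disconnect -n "$subnqn"

# Drop the host entry before moving on to the next key/dhgroup combination.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"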
00:09:46.300 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:46.300 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:46.300 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.558 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:47.124 00:09:47.124 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:47.124 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:47.124 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:47.382 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.641 12:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:47.641 { 00:09:47.641 "cntlid": 43, 00:09:47.641 "qid": 0, 00:09:47.641 "state": "enabled", 00:09:47.641 "thread": "nvmf_tgt_poll_group_000", 00:09:47.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:47.641 "listen_address": { 00:09:47.641 "trtype": "TCP", 00:09:47.641 "adrfam": "IPv4", 00:09:47.641 "traddr": "10.0.0.3", 00:09:47.641 "trsvcid": "4420" 00:09:47.641 }, 00:09:47.641 "peer_address": { 00:09:47.641 "trtype": "TCP", 00:09:47.641 "adrfam": "IPv4", 00:09:47.641 "traddr": "10.0.0.1", 00:09:47.641 "trsvcid": "56270" 00:09:47.641 }, 00:09:47.641 "auth": { 00:09:47.641 "state": "completed", 00:09:47.641 "digest": "sha256", 00:09:47.641 "dhgroup": "ffdhe8192" 00:09:47.641 } 00:09:47.641 } 00:09:47.641 ]' 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:47.641 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:47.898 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:47.898 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:48.490 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:48.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:48.490 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:48.490 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.490 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:09:48.490 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.490 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:48.490 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:48.490 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:48.801 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:09:48.801 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:48.801 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:48.801 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:48.801 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:48.801 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:48.801 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.801 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.801 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.801 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.802 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.802 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.802 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:49.369 00:09:49.369 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:49.369 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:49.369 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:49.628 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:49.628 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:49.628 12:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.628 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.628 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.628 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:49.628 { 00:09:49.628 "cntlid": 45, 00:09:49.628 "qid": 0, 00:09:49.628 "state": "enabled", 00:09:49.628 "thread": "nvmf_tgt_poll_group_000", 00:09:49.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:49.628 "listen_address": { 00:09:49.628 "trtype": "TCP", 00:09:49.628 "adrfam": "IPv4", 00:09:49.628 "traddr": "10.0.0.3", 00:09:49.628 "trsvcid": "4420" 00:09:49.628 }, 00:09:49.628 "peer_address": { 00:09:49.628 "trtype": "TCP", 00:09:49.628 "adrfam": "IPv4", 00:09:49.628 "traddr": "10.0.0.1", 00:09:49.628 "trsvcid": "56310" 00:09:49.628 }, 00:09:49.628 "auth": { 00:09:49.628 "state": "completed", 00:09:49.628 "digest": "sha256", 00:09:49.628 "dhgroup": "ffdhe8192" 00:09:49.628 } 00:09:49.628 } 00:09:49.628 ]' 00:09:49.628 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:49.628 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:49.628 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:49.628 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:49.629 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:49.629 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:49.629 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:49.629 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:50.196 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:50.196 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:50.764 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:50.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:50.764 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:50.764 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:50.764 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.764 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.764 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:50.764 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:50.764 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:51.022 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:51.589 00:09:51.589 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:51.589 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:51.589 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:51.848 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:51.848 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:51.848 
12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.848 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.848 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.848 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:51.848 { 00:09:51.848 "cntlid": 47, 00:09:51.848 "qid": 0, 00:09:51.848 "state": "enabled", 00:09:51.848 "thread": "nvmf_tgt_poll_group_000", 00:09:51.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:51.848 "listen_address": { 00:09:51.848 "trtype": "TCP", 00:09:51.848 "adrfam": "IPv4", 00:09:51.848 "traddr": "10.0.0.3", 00:09:51.848 "trsvcid": "4420" 00:09:51.848 }, 00:09:51.848 "peer_address": { 00:09:51.848 "trtype": "TCP", 00:09:51.848 "adrfam": "IPv4", 00:09:51.848 "traddr": "10.0.0.1", 00:09:51.848 "trsvcid": "56346" 00:09:51.848 }, 00:09:51.848 "auth": { 00:09:51.848 "state": "completed", 00:09:51.848 "digest": "sha256", 00:09:51.848 "dhgroup": "ffdhe8192" 00:09:51.848 } 00:09:51.848 } 00:09:51.848 ]' 00:09:51.848 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:51.848 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:51.848 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:52.108 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:52.108 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:52.108 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.108 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.108 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:52.367 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:52.367 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:09:52.935 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:52.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:52.935 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:52.935 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.935 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
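Each pass above ends with the same handshake driven from the kernel initiator via nvme-cli, followed by tear-down of the host entry. Condensed from the log entries above; the <DHHC-1 ...> placeholders stand for the generated test secrets printed in the log (key3 carries no controller secret, so --dhchap-ctrl-secret is omitted for that key):

  # Kernel initiator: connect with the per-key DH-HMAC-CHAP secret(s).
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 \
      --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 \
      --dhchap-secret '<DHHC-1 host secret for this key>' \
      --dhchap-ctrl-secret '<DHHC-1 controller secret, when one exists>'
  # Disconnect and drop the host ACL before the next digest/dhgroup/key combination.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05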
00:09:52.935 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.935 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:52.935 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:52.935 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:52.935 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:52.935 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:53.195 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:53.455 00:09:53.455 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:53.455 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:53.455 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:53.714 { 00:09:53.714 "cntlid": 49, 00:09:53.714 "qid": 0, 00:09:53.714 "state": "enabled", 00:09:53.714 "thread": "nvmf_tgt_poll_group_000", 00:09:53.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:53.714 "listen_address": { 00:09:53.714 "trtype": "TCP", 00:09:53.714 "adrfam": "IPv4", 00:09:53.714 "traddr": "10.0.0.3", 00:09:53.714 "trsvcid": "4420" 00:09:53.714 }, 00:09:53.714 "peer_address": { 00:09:53.714 "trtype": "TCP", 00:09:53.714 "adrfam": "IPv4", 00:09:53.714 "traddr": "10.0.0.1", 00:09:53.714 "trsvcid": "33776" 00:09:53.714 }, 00:09:53.714 "auth": { 00:09:53.714 "state": "completed", 00:09:53.714 "digest": "sha384", 00:09:53.714 "dhgroup": "null" 00:09:53.714 } 00:09:53.714 } 00:09:53.714 ]' 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:53.714 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.281 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:54.281 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:09:54.848 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:54.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:54.848 12:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:54.848 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.848 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.849 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.849 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:54.849 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:54.849 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.107 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.364 00:09:55.364 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:55.364 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:55.364 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:55.622 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:55.622 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:55.622 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.622 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.880 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.880 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:55.880 { 00:09:55.880 "cntlid": 51, 00:09:55.880 "qid": 0, 00:09:55.880 "state": "enabled", 00:09:55.880 "thread": "nvmf_tgt_poll_group_000", 00:09:55.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:55.880 "listen_address": { 00:09:55.880 "trtype": "TCP", 00:09:55.880 "adrfam": "IPv4", 00:09:55.880 "traddr": "10.0.0.3", 00:09:55.880 "trsvcid": "4420" 00:09:55.880 }, 00:09:55.880 "peer_address": { 00:09:55.880 "trtype": "TCP", 00:09:55.880 "adrfam": "IPv4", 00:09:55.880 "traddr": "10.0.0.1", 00:09:55.880 "trsvcid": "33802" 00:09:55.880 }, 00:09:55.880 "auth": { 00:09:55.880 "state": "completed", 00:09:55.880 "digest": "sha384", 00:09:55.880 "dhgroup": "null" 00:09:55.880 } 00:09:55.880 } 00:09:55.880 ]' 00:09:55.880 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:55.880 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:55.880 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:55.880 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:55.880 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:55.880 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:55.880 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:55.880 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.137 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:56.137 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:09:56.702 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:56.702 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:56.702 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:56.702 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.702 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.702 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.702 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:56.702 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:56.702 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:56.960 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.528 00:09:57.528 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.528 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:09:57.528 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.528 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.528 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.528 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.528 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.528 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.528 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:57.528 { 00:09:57.528 "cntlid": 53, 00:09:57.528 "qid": 0, 00:09:57.528 "state": "enabled", 00:09:57.528 "thread": "nvmf_tgt_poll_group_000", 00:09:57.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:57.528 "listen_address": { 00:09:57.528 "trtype": "TCP", 00:09:57.528 "adrfam": "IPv4", 00:09:57.528 "traddr": "10.0.0.3", 00:09:57.528 "trsvcid": "4420" 00:09:57.528 }, 00:09:57.528 "peer_address": { 00:09:57.528 "trtype": "TCP", 00:09:57.528 "adrfam": "IPv4", 00:09:57.528 "traddr": "10.0.0.1", 00:09:57.528 "trsvcid": "33836" 00:09:57.528 }, 00:09:57.528 "auth": { 00:09:57.528 "state": "completed", 00:09:57.528 "digest": "sha384", 00:09:57.528 "dhgroup": "null" 00:09:57.528 } 00:09:57.528 } 00:09:57.528 ]' 00:09:57.528 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:57.787 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:57.787 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:57.787 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:57.787 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:57.787 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:57.787 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:57.787 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.046 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:58.046 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:58.981 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:59.238 00:09:59.238 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.238 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:09:59.238 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.754 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.754 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.754 { 00:09:59.754 "cntlid": 55, 00:09:59.754 "qid": 0, 00:09:59.754 "state": "enabled", 00:09:59.754 "thread": "nvmf_tgt_poll_group_000", 00:09:59.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:09:59.754 "listen_address": { 00:09:59.754 "trtype": "TCP", 00:09:59.754 "adrfam": "IPv4", 00:09:59.754 "traddr": "10.0.0.3", 00:09:59.754 "trsvcid": "4420" 00:09:59.754 }, 00:09:59.754 "peer_address": { 00:09:59.754 "trtype": "TCP", 00:09:59.754 "adrfam": "IPv4", 00:09:59.754 "traddr": "10.0.0.1", 00:09:59.754 "trsvcid": "33876" 00:09:59.754 }, 00:09:59.754 "auth": { 00:09:59.754 "state": "completed", 00:09:59.754 "digest": "sha384", 00:09:59.754 "dhgroup": "null" 00:09:59.754 } 00:09:59.754 } 00:09:59.754 ]' 00:09:59.754 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:59.754 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:59.754 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:59.754 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:59.754 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:59.754 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.754 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.754 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.011 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:00.011 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
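The log is midway through sweeping digest/dhgroup/key combinations here (sha384 with the null dhgroup finishing, ffdhe2048 next). The shape of that sweep is visible from the for-loops echoed at target/auth.sh@118 through @121; a reconstruction of the outer structure, where the array contents shown are only the values observed so far in this log, not necessarily the script's full lists:

  for digest in "${digests[@]}"; do          # sha256, sha384, ... as observed
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe8192, null, ffdhe2048, ... as observed
      for keyid in "${!keys[@]}"; do         # 0..3 in this run
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done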
00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.946 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.204 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.204 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.204 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.205 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.463 00:10:01.463 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.463 
12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.463 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.722 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.722 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.722 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.722 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.722 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.722 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.722 { 00:10:01.722 "cntlid": 57, 00:10:01.722 "qid": 0, 00:10:01.722 "state": "enabled", 00:10:01.722 "thread": "nvmf_tgt_poll_group_000", 00:10:01.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:01.722 "listen_address": { 00:10:01.722 "trtype": "TCP", 00:10:01.722 "adrfam": "IPv4", 00:10:01.722 "traddr": "10.0.0.3", 00:10:01.722 "trsvcid": "4420" 00:10:01.722 }, 00:10:01.722 "peer_address": { 00:10:01.722 "trtype": "TCP", 00:10:01.723 "adrfam": "IPv4", 00:10:01.723 "traddr": "10.0.0.1", 00:10:01.723 "trsvcid": "33896" 00:10:01.723 }, 00:10:01.723 "auth": { 00:10:01.723 "state": "completed", 00:10:01.723 "digest": "sha384", 00:10:01.723 "dhgroup": "ffdhe2048" 00:10:01.723 } 00:10:01.723 } 00:10:01.723 ]' 00:10:01.723 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:01.723 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:01.723 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.723 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:01.723 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:01.723 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.723 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.723 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:01.982 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:01.982 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: 
--dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:02.551 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:02.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:02.551 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:02.551 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.551 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.551 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.551 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:02.551 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:02.551 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:02.810 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.379 00:10:03.379 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:03.379 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:03.379 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:03.638 { 00:10:03.638 "cntlid": 59, 00:10:03.638 "qid": 0, 00:10:03.638 "state": "enabled", 00:10:03.638 "thread": "nvmf_tgt_poll_group_000", 00:10:03.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:03.638 "listen_address": { 00:10:03.638 "trtype": "TCP", 00:10:03.638 "adrfam": "IPv4", 00:10:03.638 "traddr": "10.0.0.3", 00:10:03.638 "trsvcid": "4420" 00:10:03.638 }, 00:10:03.638 "peer_address": { 00:10:03.638 "trtype": "TCP", 00:10:03.638 "adrfam": "IPv4", 00:10:03.638 "traddr": "10.0.0.1", 00:10:03.638 "trsvcid": "42154" 00:10:03.638 }, 00:10:03.638 "auth": { 00:10:03.638 "state": "completed", 00:10:03.638 "digest": "sha384", 00:10:03.638 "dhgroup": "ffdhe2048" 00:10:03.638 } 00:10:03.638 } 00:10:03.638 ]' 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:03.638 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:03.898 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:03.898 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:04.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.835 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.403 00:10:05.403 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.403 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.403 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:05.662 { 00:10:05.662 "cntlid": 61, 00:10:05.662 "qid": 0, 00:10:05.662 "state": "enabled", 00:10:05.662 "thread": "nvmf_tgt_poll_group_000", 00:10:05.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:05.662 "listen_address": { 00:10:05.662 "trtype": "TCP", 00:10:05.662 "adrfam": "IPv4", 00:10:05.662 "traddr": "10.0.0.3", 00:10:05.662 "trsvcid": "4420" 00:10:05.662 }, 00:10:05.662 "peer_address": { 00:10:05.662 "trtype": "TCP", 00:10:05.662 "adrfam": "IPv4", 00:10:05.662 "traddr": "10.0.0.1", 00:10:05.662 "trsvcid": "42188" 00:10:05.662 }, 00:10:05.662 "auth": { 00:10:05.662 "state": "completed", 00:10:05.662 "digest": "sha384", 00:10:05.662 "dhgroup": "ffdhe2048" 00:10:05.662 } 00:10:05.662 } 00:10:05.662 ]' 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:05.662 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.921 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:05.921 12:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:06.488 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.489 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:06.489 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.489 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.489 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.489 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:06.489 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:06.489 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:07.057 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:07.057 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.057 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:07.057 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:07.057 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:07.057 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.057 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:10:07.057 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.057 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.057 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.057 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:07.058 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:07.058 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:07.317 00:10:07.317 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:07.317 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.317 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:07.575 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:07.575 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:07.575 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.575 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.575 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.575 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:07.575 { 00:10:07.575 "cntlid": 63, 00:10:07.575 "qid": 0, 00:10:07.575 "state": "enabled", 00:10:07.575 "thread": "nvmf_tgt_poll_group_000", 00:10:07.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:07.575 "listen_address": { 00:10:07.575 "trtype": "TCP", 00:10:07.575 "adrfam": "IPv4", 00:10:07.575 "traddr": "10.0.0.3", 00:10:07.575 "trsvcid": "4420" 00:10:07.575 }, 00:10:07.576 "peer_address": { 00:10:07.576 "trtype": "TCP", 00:10:07.576 "adrfam": "IPv4", 00:10:07.576 "traddr": "10.0.0.1", 00:10:07.576 "trsvcid": "42210" 00:10:07.576 }, 00:10:07.576 "auth": { 00:10:07.576 "state": "completed", 00:10:07.576 "digest": "sha384", 00:10:07.576 "dhgroup": "ffdhe2048" 00:10:07.576 } 00:10:07.576 } 00:10:07.576 ]' 00:10:07.576 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:07.576 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:07.576 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:07.576 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:07.576 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:07.835 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:07.835 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:07.835 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.835 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:07.835 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:08.772 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.772 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:08.772 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.772 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.772 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.772 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:08.772 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:08.772 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:08.772 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:09.031 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.290 00:10:09.290 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.290 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:09.290 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.548 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.548 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.548 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.548 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.548 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.548 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:09.548 { 00:10:09.548 "cntlid": 65, 00:10:09.548 "qid": 0, 00:10:09.548 "state": "enabled", 00:10:09.548 "thread": "nvmf_tgt_poll_group_000", 00:10:09.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:09.548 "listen_address": { 00:10:09.548 "trtype": "TCP", 00:10:09.548 "adrfam": "IPv4", 00:10:09.548 "traddr": "10.0.0.3", 00:10:09.548 "trsvcid": "4420" 00:10:09.548 }, 00:10:09.548 "peer_address": { 00:10:09.548 "trtype": "TCP", 00:10:09.548 "adrfam": "IPv4", 00:10:09.548 "traddr": "10.0.0.1", 00:10:09.548 "trsvcid": "42232" 00:10:09.548 }, 00:10:09.548 "auth": { 00:10:09.548 "state": "completed", 00:10:09.548 "digest": "sha384", 00:10:09.548 "dhgroup": "ffdhe3072" 00:10:09.548 } 00:10:09.548 } 00:10:09.548 ]' 00:10:09.548 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:09.548 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:09.548 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:09.548 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:09.548 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:09.807 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:09.807 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:09.807 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.065 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:10.065 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:10.631 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:10.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:10.631 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:10.631 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.631 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.631 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.631 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:10.631 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:10.631 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.890 12:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.890 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.460 00:10:11.460 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:11.460 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:11.460 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.460 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.460 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.460 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.460 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.460 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.460 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:11.460 { 00:10:11.460 "cntlid": 67, 00:10:11.460 "qid": 0, 00:10:11.460 "state": "enabled", 00:10:11.460 "thread": "nvmf_tgt_poll_group_000", 00:10:11.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:11.460 "listen_address": { 00:10:11.460 "trtype": "TCP", 00:10:11.460 "adrfam": "IPv4", 00:10:11.460 "traddr": "10.0.0.3", 00:10:11.460 "trsvcid": "4420" 00:10:11.460 }, 00:10:11.460 "peer_address": { 00:10:11.460 "trtype": "TCP", 00:10:11.460 "adrfam": "IPv4", 00:10:11.460 "traddr": "10.0.0.1", 00:10:11.460 "trsvcid": "42258" 00:10:11.460 }, 00:10:11.460 "auth": { 00:10:11.460 "state": "completed", 00:10:11.460 "digest": "sha384", 00:10:11.460 "dhgroup": "ffdhe3072" 00:10:11.460 } 00:10:11.460 } 00:10:11.460 ]' 00:10:11.460 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:11.720 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:11.720 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:11.720 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:11.720 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:11.720 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:11.720 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:11.720 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.979 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:11.979 12:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:12.549 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.549 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:12.549 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.549 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.549 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.549 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:12.549 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:12.549 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.808 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.375 00:10:13.375 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.375 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:13.375 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.375 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.375 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.375 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.375 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.375 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.375 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:13.375 { 00:10:13.375 "cntlid": 69, 00:10:13.375 "qid": 0, 00:10:13.375 "state": "enabled", 00:10:13.375 "thread": "nvmf_tgt_poll_group_000", 00:10:13.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:13.375 "listen_address": { 00:10:13.375 "trtype": "TCP", 00:10:13.375 "adrfam": "IPv4", 00:10:13.375 "traddr": "10.0.0.3", 00:10:13.375 "trsvcid": "4420" 00:10:13.375 }, 00:10:13.375 "peer_address": { 00:10:13.375 "trtype": "TCP", 00:10:13.375 "adrfam": "IPv4", 00:10:13.375 "traddr": "10.0.0.1", 00:10:13.375 "trsvcid": "47662" 00:10:13.375 }, 00:10:13.375 "auth": { 00:10:13.375 "state": "completed", 00:10:13.375 "digest": "sha384", 00:10:13.375 "dhgroup": "ffdhe3072" 00:10:13.375 } 00:10:13.375 } 00:10:13.375 ]' 00:10:13.375 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:13.634 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:13.634 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:13.634 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:13.634 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:13.634 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.634 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:10:13.634 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.892 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:13.892 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:14.458 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.458 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:14.458 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.458 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.458 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.458 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.458 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:14.458 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:14.716 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:15.355 00:10:15.355 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.355 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.355 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.355 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.355 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.355 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.355 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.640 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.640 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.640 { 00:10:15.640 "cntlid": 71, 00:10:15.640 "qid": 0, 00:10:15.640 "state": "enabled", 00:10:15.640 "thread": "nvmf_tgt_poll_group_000", 00:10:15.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:15.640 "listen_address": { 00:10:15.640 "trtype": "TCP", 00:10:15.640 "adrfam": "IPv4", 00:10:15.640 "traddr": "10.0.0.3", 00:10:15.640 "trsvcid": "4420" 00:10:15.640 }, 00:10:15.640 "peer_address": { 00:10:15.640 "trtype": "TCP", 00:10:15.640 "adrfam": "IPv4", 00:10:15.640 "traddr": "10.0.0.1", 00:10:15.640 "trsvcid": "47670" 00:10:15.640 }, 00:10:15.640 "auth": { 00:10:15.640 "state": "completed", 00:10:15.640 "digest": "sha384", 00:10:15.640 "dhgroup": "ffdhe3072" 00:10:15.640 } 00:10:15.640 } 00:10:15.640 ]' 00:10:15.640 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.640 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:15.640 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.640 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:15.640 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.640 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.640 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.640 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.898 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:15.898 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:16.463 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.463 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:16.463 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.463 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.463 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.463 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:16.463 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.463 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:16.463 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.721 12:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.721 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.286 00:10:17.286 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.286 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.286 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.286 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.286 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.286 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.286 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.286 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.286 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:17.286 { 00:10:17.286 "cntlid": 73, 00:10:17.286 "qid": 0, 00:10:17.286 "state": "enabled", 00:10:17.286 "thread": "nvmf_tgt_poll_group_000", 00:10:17.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:17.286 "listen_address": { 00:10:17.286 "trtype": "TCP", 00:10:17.286 "adrfam": "IPv4", 00:10:17.286 "traddr": "10.0.0.3", 00:10:17.286 "trsvcid": "4420" 00:10:17.286 }, 00:10:17.286 "peer_address": { 00:10:17.286 "trtype": "TCP", 00:10:17.287 "adrfam": "IPv4", 00:10:17.287 "traddr": "10.0.0.1", 00:10:17.287 "trsvcid": "47702" 00:10:17.287 }, 00:10:17.287 "auth": { 00:10:17.287 "state": "completed", 00:10:17.287 "digest": "sha384", 00:10:17.287 "dhgroup": "ffdhe4096" 00:10:17.287 } 00:10:17.287 } 00:10:17.287 ]' 00:10:17.287 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:17.545 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:17.545 12:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:17.545 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:17.546 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:17.546 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.546 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.546 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.805 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:17.805 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:18.374 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.374 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:18.374 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.374 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.374 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.374 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:18.374 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:18.374 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.632 12:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.632 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.889 00:10:18.889 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:18.889 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.889 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:19.456 { 00:10:19.456 "cntlid": 75, 00:10:19.456 "qid": 0, 00:10:19.456 "state": "enabled", 00:10:19.456 "thread": "nvmf_tgt_poll_group_000", 00:10:19.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:19.456 "listen_address": { 00:10:19.456 "trtype": "TCP", 00:10:19.456 "adrfam": "IPv4", 00:10:19.456 "traddr": "10.0.0.3", 00:10:19.456 "trsvcid": "4420" 00:10:19.456 }, 00:10:19.456 "peer_address": { 00:10:19.456 "trtype": "TCP", 00:10:19.456 "adrfam": "IPv4", 00:10:19.456 "traddr": "10.0.0.1", 00:10:19.456 "trsvcid": "47746" 00:10:19.456 }, 00:10:19.456 "auth": { 00:10:19.456 "state": "completed", 00:10:19.456 "digest": "sha384", 00:10:19.456 "dhgroup": "ffdhe4096" 00:10:19.456 } 00:10:19.456 } 00:10:19.456 ]' 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.456 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.715 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:19.716 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:20.285 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.285 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:20.285 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.285 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.285 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.285 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:20.285 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:20.285 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.544 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.112 00:10:21.112 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:21.112 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.112 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.112 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.112 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.112 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.112 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.372 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.372 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:21.372 { 00:10:21.372 "cntlid": 77, 00:10:21.372 "qid": 0, 00:10:21.372 "state": "enabled", 00:10:21.372 "thread": "nvmf_tgt_poll_group_000", 00:10:21.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:21.372 "listen_address": { 00:10:21.372 "trtype": "TCP", 00:10:21.372 "adrfam": "IPv4", 00:10:21.372 "traddr": "10.0.0.3", 00:10:21.372 "trsvcid": "4420" 00:10:21.372 }, 00:10:21.372 "peer_address": { 00:10:21.372 "trtype": "TCP", 00:10:21.372 "adrfam": "IPv4", 00:10:21.372 "traddr": "10.0.0.1", 00:10:21.372 "trsvcid": "47788" 00:10:21.372 }, 00:10:21.372 "auth": { 00:10:21.372 "state": "completed", 00:10:21.372 "digest": "sha384", 00:10:21.372 "dhgroup": "ffdhe4096" 00:10:21.372 } 00:10:21.372 } 00:10:21.372 ]' 00:10:21.372 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:21.372 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:21.372 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:10:21.372 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:21.372 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:21.372 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.372 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.372 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.632 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:21.632 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:22.200 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.200 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:22.200 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.200 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.200 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.200 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:22.200 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:22.200 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:22.458 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:10:22.458 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.458 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:22.458 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:22.458 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:22.458 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.458 12:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:10:22.458 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.458 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.458 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.459 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:22.459 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:22.459 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:22.717 00:10:22.976 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.976 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.976 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.976 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.976 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.976 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.976 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.976 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.976 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.976 { 00:10:22.976 "cntlid": 79, 00:10:22.976 "qid": 0, 00:10:22.976 "state": "enabled", 00:10:22.976 "thread": "nvmf_tgt_poll_group_000", 00:10:22.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:22.976 "listen_address": { 00:10:22.976 "trtype": "TCP", 00:10:22.976 "adrfam": "IPv4", 00:10:22.976 "traddr": "10.0.0.3", 00:10:22.976 "trsvcid": "4420" 00:10:22.976 }, 00:10:22.976 "peer_address": { 00:10:22.976 "trtype": "TCP", 00:10:22.976 "adrfam": "IPv4", 00:10:22.976 "traddr": "10.0.0.1", 00:10:22.976 "trsvcid": "47814" 00:10:22.976 }, 00:10:22.976 "auth": { 00:10:22.976 "state": "completed", 00:10:22.976 "digest": "sha384", 00:10:22.976 "dhgroup": "ffdhe4096" 00:10:22.976 } 00:10:22.976 } 00:10:22.976 ]' 00:10:22.976 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.235 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:23.235 12:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:23.235 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:23.235 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:23.235 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.235 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.235 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.495 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:23.495 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:24.063 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.063 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:24.063 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.063 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.063 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.063 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:24.063 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.063 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:24.063 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.324 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.893 00:10:24.893 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.893 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.893 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.153 { 00:10:25.153 "cntlid": 81, 00:10:25.153 "qid": 0, 00:10:25.153 "state": "enabled", 00:10:25.153 "thread": "nvmf_tgt_poll_group_000", 00:10:25.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:25.153 "listen_address": { 00:10:25.153 "trtype": "TCP", 00:10:25.153 "adrfam": "IPv4", 00:10:25.153 "traddr": "10.0.0.3", 00:10:25.153 "trsvcid": "4420" 00:10:25.153 }, 00:10:25.153 "peer_address": { 00:10:25.153 "trtype": "TCP", 00:10:25.153 "adrfam": "IPv4", 00:10:25.153 "traddr": "10.0.0.1", 00:10:25.153 "trsvcid": "49648" 00:10:25.153 }, 00:10:25.153 "auth": { 00:10:25.153 "state": "completed", 00:10:25.153 "digest": "sha384", 00:10:25.153 "dhgroup": "ffdhe6144" 00:10:25.153 } 00:10:25.153 } 00:10:25.153 ]' 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.153 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.421 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:25.421 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:25.987 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.987 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:25.987 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.987 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.987 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.987 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:25.987 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:25.987 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.246 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.812 00:10:26.813 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.813 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.813 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.071 { 00:10:27.071 "cntlid": 83, 00:10:27.071 "qid": 0, 00:10:27.071 "state": "enabled", 00:10:27.071 "thread": "nvmf_tgt_poll_group_000", 00:10:27.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:27.071 "listen_address": { 00:10:27.071 "trtype": "TCP", 00:10:27.071 "adrfam": "IPv4", 00:10:27.071 "traddr": "10.0.0.3", 00:10:27.071 "trsvcid": "4420" 00:10:27.071 }, 00:10:27.071 "peer_address": { 00:10:27.071 "trtype": "TCP", 00:10:27.071 "adrfam": "IPv4", 00:10:27.071 "traddr": "10.0.0.1", 00:10:27.071 "trsvcid": "49686" 00:10:27.071 }, 00:10:27.071 "auth": { 00:10:27.071 "state": "completed", 00:10:27.071 "digest": "sha384", 
00:10:27.071 "dhgroup": "ffdhe6144" 00:10:27.071 } 00:10:27.071 } 00:10:27.071 ]' 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.071 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.330 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:27.330 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.266 12:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.834 00:10:28.834 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.834 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.834 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.092 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.092 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.092 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.092 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.092 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.092 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.092 { 00:10:29.092 "cntlid": 85, 00:10:29.092 "qid": 0, 00:10:29.092 "state": "enabled", 00:10:29.092 "thread": "nvmf_tgt_poll_group_000", 00:10:29.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:29.092 "listen_address": { 00:10:29.092 "trtype": "TCP", 00:10:29.092 "adrfam": "IPv4", 00:10:29.092 "traddr": "10.0.0.3", 00:10:29.092 "trsvcid": "4420" 00:10:29.092 }, 00:10:29.092 "peer_address": { 00:10:29.092 "trtype": "TCP", 00:10:29.092 "adrfam": "IPv4", 00:10:29.092 "traddr": "10.0.0.1", 00:10:29.092 "trsvcid": "49716" 
00:10:29.092 }, 00:10:29.092 "auth": { 00:10:29.092 "state": "completed", 00:10:29.092 "digest": "sha384", 00:10:29.092 "dhgroup": "ffdhe6144" 00:10:29.092 } 00:10:29.092 } 00:10:29.092 ]' 00:10:29.092 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.092 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:29.092 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.092 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:29.092 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.350 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.350 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.350 12:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.608 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:29.608 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:30.175 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.175 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:30.175 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.175 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.175 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.175 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.175 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:30.175 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:30.433 12:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:30.692 00:10:30.692 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:30.692 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.692 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:30.951 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.951 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.951 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.951 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.951 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.951 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:30.951 { 00:10:30.951 "cntlid": 87, 00:10:30.951 "qid": 0, 00:10:30.951 "state": "enabled", 00:10:30.951 "thread": "nvmf_tgt_poll_group_000", 00:10:30.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:30.951 "listen_address": { 00:10:30.951 "trtype": "TCP", 00:10:30.951 "adrfam": "IPv4", 00:10:30.951 "traddr": "10.0.0.3", 00:10:30.951 "trsvcid": "4420" 00:10:30.951 }, 00:10:30.951 "peer_address": { 00:10:30.951 "trtype": "TCP", 00:10:30.951 "adrfam": "IPv4", 00:10:30.951 "traddr": "10.0.0.1", 00:10:30.951 "trsvcid": 
"49744" 00:10:30.951 }, 00:10:30.951 "auth": { 00:10:30.951 "state": "completed", 00:10:30.951 "digest": "sha384", 00:10:30.951 "dhgroup": "ffdhe6144" 00:10:30.951 } 00:10:30.951 } 00:10:30.951 ]' 00:10:30.951 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:31.209 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:31.209 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.209 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:31.209 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.209 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.209 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.209 12:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.467 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:31.467 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:32.033 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.034 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:32.034 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.034 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.034 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.034 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:32.034 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.034 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:32.034 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.601 12:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.860 00:10:33.118 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:33.118 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:33.118 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:33.377 { 00:10:33.377 "cntlid": 89, 00:10:33.377 "qid": 0, 00:10:33.377 "state": "enabled", 00:10:33.377 "thread": "nvmf_tgt_poll_group_000", 00:10:33.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:33.377 "listen_address": { 00:10:33.377 "trtype": "TCP", 00:10:33.377 "adrfam": "IPv4", 00:10:33.377 "traddr": "10.0.0.3", 00:10:33.377 "trsvcid": "4420" 00:10:33.377 }, 00:10:33.377 "peer_address": { 00:10:33.377 
"trtype": "TCP", 00:10:33.377 "adrfam": "IPv4", 00:10:33.377 "traddr": "10.0.0.1", 00:10:33.377 "trsvcid": "49784" 00:10:33.377 }, 00:10:33.377 "auth": { 00:10:33.377 "state": "completed", 00:10:33.377 "digest": "sha384", 00:10:33.377 "dhgroup": "ffdhe8192" 00:10:33.377 } 00:10:33.377 } 00:10:33.377 ]' 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.377 12:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.636 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:33.636 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:34.570 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.570 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:34.570 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.570 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.570 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.570 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:34.570 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:34.570 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:34.570 12:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:10:34.570 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:34.570 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:34.570 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:34.570 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:34.570 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.570 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.570 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.570 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.571 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.571 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.571 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.571 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.137 00:10:35.137 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.137 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.137 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.396 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.396 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.396 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.396 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.396 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.396 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:35.396 { 00:10:35.396 "cntlid": 91, 00:10:35.396 "qid": 0, 00:10:35.396 "state": "enabled", 00:10:35.396 "thread": "nvmf_tgt_poll_group_000", 00:10:35.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 
00:10:35.396 "listen_address": { 00:10:35.396 "trtype": "TCP", 00:10:35.396 "adrfam": "IPv4", 00:10:35.396 "traddr": "10.0.0.3", 00:10:35.396 "trsvcid": "4420" 00:10:35.396 }, 00:10:35.396 "peer_address": { 00:10:35.396 "trtype": "TCP", 00:10:35.396 "adrfam": "IPv4", 00:10:35.396 "traddr": "10.0.0.1", 00:10:35.396 "trsvcid": "54452" 00:10:35.396 }, 00:10:35.396 "auth": { 00:10:35.396 "state": "completed", 00:10:35.396 "digest": "sha384", 00:10:35.396 "dhgroup": "ffdhe8192" 00:10:35.396 } 00:10:35.396 } 00:10:35.396 ]' 00:10:35.396 12:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:35.396 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:35.396 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:35.654 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:35.654 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:35.654 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.654 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.654 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.912 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:35.912 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:36.478 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.478 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:36.478 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.478 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.478 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.478 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:36.478 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:36.478 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.737 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:37.303 00:10:37.303 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.303 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.304 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.562 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.562 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.562 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.562 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.562 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.562 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.562 { 00:10:37.562 "cntlid": 93, 00:10:37.562 "qid": 0, 00:10:37.562 "state": "enabled", 00:10:37.562 "thread": 
"nvmf_tgt_poll_group_000", 00:10:37.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:37.562 "listen_address": { 00:10:37.562 "trtype": "TCP", 00:10:37.562 "adrfam": "IPv4", 00:10:37.562 "traddr": "10.0.0.3", 00:10:37.562 "trsvcid": "4420" 00:10:37.562 }, 00:10:37.562 "peer_address": { 00:10:37.562 "trtype": "TCP", 00:10:37.562 "adrfam": "IPv4", 00:10:37.562 "traddr": "10.0.0.1", 00:10:37.562 "trsvcid": "54472" 00:10:37.562 }, 00:10:37.562 "auth": { 00:10:37.562 "state": "completed", 00:10:37.562 "digest": "sha384", 00:10:37.562 "dhgroup": "ffdhe8192" 00:10:37.562 } 00:10:37.562 } 00:10:37.562 ]' 00:10:37.562 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.562 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:37.562 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.821 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:37.821 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.821 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.822 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.822 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.080 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:38.080 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:38.647 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.647 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:38.647 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.647 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.647 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.647 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.647 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:38.647 12:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:38.905 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:10:38.905 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.905 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:38.905 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:38.905 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:38.905 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.905 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:10:38.905 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.905 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.163 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.163 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:39.163 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:39.163 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:39.728 00:10:39.728 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.728 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.728 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.986 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.987 { 00:10:39.987 "cntlid": 95, 00:10:39.987 "qid": 0, 00:10:39.987 "state": "enabled", 00:10:39.987 
"thread": "nvmf_tgt_poll_group_000", 00:10:39.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:39.987 "listen_address": { 00:10:39.987 "trtype": "TCP", 00:10:39.987 "adrfam": "IPv4", 00:10:39.987 "traddr": "10.0.0.3", 00:10:39.987 "trsvcid": "4420" 00:10:39.987 }, 00:10:39.987 "peer_address": { 00:10:39.987 "trtype": "TCP", 00:10:39.987 "adrfam": "IPv4", 00:10:39.987 "traddr": "10.0.0.1", 00:10:39.987 "trsvcid": "54504" 00:10:39.987 }, 00:10:39.987 "auth": { 00:10:39.987 "state": "completed", 00:10:39.987 "digest": "sha384", 00:10:39.987 "dhgroup": "ffdhe8192" 00:10:39.987 } 00:10:39.987 } 00:10:39.987 ]' 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.987 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.553 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:40.553 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:41.120 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.120 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:41.120 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.120 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.120 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.120 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:41.120 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:41.120 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:41.120 12:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:41.120 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:41.378 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:10:41.378 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:41.378 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:41.378 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:41.378 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:41.379 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.379 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.379 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.379 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.379 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.379 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.379 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.379 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.680 00:10:41.680 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:41.680 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:41.680 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.970 { 00:10:41.970 "cntlid": 97, 00:10:41.970 "qid": 0, 00:10:41.970 "state": "enabled", 00:10:41.970 "thread": "nvmf_tgt_poll_group_000", 00:10:41.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:41.970 "listen_address": { 00:10:41.970 "trtype": "TCP", 00:10:41.970 "adrfam": "IPv4", 00:10:41.970 "traddr": "10.0.0.3", 00:10:41.970 "trsvcid": "4420" 00:10:41.970 }, 00:10:41.970 "peer_address": { 00:10:41.970 "trtype": "TCP", 00:10:41.970 "adrfam": "IPv4", 00:10:41.970 "traddr": "10.0.0.1", 00:10:41.970 "trsvcid": "54536" 00:10:41.970 }, 00:10:41.970 "auth": { 00:10:41.970 "state": "completed", 00:10:41.970 "digest": "sha512", 00:10:41.970 "dhgroup": "null" 00:10:41.970 } 00:10:41.970 } 00:10:41.970 ]' 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.970 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.241 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:42.241 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.178 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.437 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.437 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.437 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.437 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.696 00:10:43.696 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.696 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.696 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.955 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.955 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.955 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.955 12:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.955 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.955 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.955 { 00:10:43.955 "cntlid": 99, 00:10:43.955 "qid": 0, 00:10:43.955 "state": "enabled", 00:10:43.955 "thread": "nvmf_tgt_poll_group_000", 00:10:43.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:43.955 "listen_address": { 00:10:43.955 "trtype": "TCP", 00:10:43.955 "adrfam": "IPv4", 00:10:43.955 "traddr": "10.0.0.3", 00:10:43.955 "trsvcid": "4420" 00:10:43.955 }, 00:10:43.955 "peer_address": { 00:10:43.955 "trtype": "TCP", 00:10:43.955 "adrfam": "IPv4", 00:10:43.955 "traddr": "10.0.0.1", 00:10:43.955 "trsvcid": "56416" 00:10:43.955 }, 00:10:43.955 "auth": { 00:10:43.955 "state": "completed", 00:10:43.955 "digest": "sha512", 00:10:43.955 "dhgroup": "null" 00:10:43.955 } 00:10:43.955 } 00:10:43.955 ]' 00:10:43.955 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.955 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:43.955 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.955 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:43.955 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.215 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.215 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.215 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.474 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:44.474 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:45.042 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.042 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:45.042 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.042 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.042 12:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.042 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.042 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:45.042 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.302 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.561 00:10:45.561 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.561 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.561 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.821 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.821 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.821 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.821 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.821 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.821 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.821 { 00:10:45.821 "cntlid": 101, 00:10:45.821 "qid": 0, 00:10:45.821 "state": "enabled", 00:10:45.821 "thread": "nvmf_tgt_poll_group_000", 00:10:45.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:45.821 "listen_address": { 00:10:45.821 "trtype": "TCP", 00:10:45.821 "adrfam": "IPv4", 00:10:45.821 "traddr": "10.0.0.3", 00:10:45.821 "trsvcid": "4420" 00:10:45.821 }, 00:10:45.821 "peer_address": { 00:10:45.821 "trtype": "TCP", 00:10:45.821 "adrfam": "IPv4", 00:10:45.821 "traddr": "10.0.0.1", 00:10:45.821 "trsvcid": "56450" 00:10:45.821 }, 00:10:45.821 "auth": { 00:10:45.821 "state": "completed", 00:10:45.821 "digest": "sha512", 00:10:45.821 "dhgroup": "null" 00:10:45.821 } 00:10:45.821 } 00:10:45.821 ]' 00:10:45.821 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.080 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:46.080 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.080 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:46.080 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.080 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.080 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.080 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.340 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:46.340 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:46.908 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.908 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:46.908 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.908 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:10:46.908 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.908 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.908 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:46.908 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:47.166 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:47.424 00:10:47.424 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:47.424 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.424 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:47.683 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.683 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.683 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:47.683 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.683 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.683 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:47.683 { 00:10:47.683 "cntlid": 103, 00:10:47.683 "qid": 0, 00:10:47.683 "state": "enabled", 00:10:47.683 "thread": "nvmf_tgt_poll_group_000", 00:10:47.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:47.683 "listen_address": { 00:10:47.683 "trtype": "TCP", 00:10:47.683 "adrfam": "IPv4", 00:10:47.683 "traddr": "10.0.0.3", 00:10:47.683 "trsvcid": "4420" 00:10:47.683 }, 00:10:47.683 "peer_address": { 00:10:47.683 "trtype": "TCP", 00:10:47.683 "adrfam": "IPv4", 00:10:47.683 "traddr": "10.0.0.1", 00:10:47.683 "trsvcid": "56472" 00:10:47.683 }, 00:10:47.683 "auth": { 00:10:47.683 "state": "completed", 00:10:47.683 "digest": "sha512", 00:10:47.683 "dhgroup": "null" 00:10:47.683 } 00:10:47.683 } 00:10:47.683 ]' 00:10:47.683 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.942 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:47.942 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.942 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:47.942 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.942 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.942 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.942 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.200 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:48.200 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:48.767 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.767 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:48.767 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.767 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.767 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:48.767 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:48.767 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.767 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:48.767 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.025 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.591 00:10:49.591 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:49.591 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.591 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:49.591 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.591 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.591 
12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.591 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.850 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.850 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.850 { 00:10:49.850 "cntlid": 105, 00:10:49.850 "qid": 0, 00:10:49.850 "state": "enabled", 00:10:49.850 "thread": "nvmf_tgt_poll_group_000", 00:10:49.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:49.850 "listen_address": { 00:10:49.850 "trtype": "TCP", 00:10:49.850 "adrfam": "IPv4", 00:10:49.850 "traddr": "10.0.0.3", 00:10:49.850 "trsvcid": "4420" 00:10:49.850 }, 00:10:49.850 "peer_address": { 00:10:49.850 "trtype": "TCP", 00:10:49.850 "adrfam": "IPv4", 00:10:49.850 "traddr": "10.0.0.1", 00:10:49.850 "trsvcid": "56500" 00:10:49.850 }, 00:10:49.850 "auth": { 00:10:49.850 "state": "completed", 00:10:49.850 "digest": "sha512", 00:10:49.850 "dhgroup": "ffdhe2048" 00:10:49.850 } 00:10:49.850 } 00:10:49.850 ]' 00:10:49.850 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.850 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:49.850 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.850 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:49.850 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.850 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.850 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.850 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.108 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:50.108 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:50.675 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.675 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:50.675 12:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.675 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.675 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.675 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.675 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:50.675 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.933 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:51.193 00:10:51.193 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.193 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.193 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.452 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:10:51.452 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.452 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.452 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.452 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.452 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.452 { 00:10:51.452 "cntlid": 107, 00:10:51.452 "qid": 0, 00:10:51.452 "state": "enabled", 00:10:51.452 "thread": "nvmf_tgt_poll_group_000", 00:10:51.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:51.452 "listen_address": { 00:10:51.452 "trtype": "TCP", 00:10:51.452 "adrfam": "IPv4", 00:10:51.452 "traddr": "10.0.0.3", 00:10:51.452 "trsvcid": "4420" 00:10:51.452 }, 00:10:51.452 "peer_address": { 00:10:51.452 "trtype": "TCP", 00:10:51.452 "adrfam": "IPv4", 00:10:51.452 "traddr": "10.0.0.1", 00:10:51.452 "trsvcid": "56532" 00:10:51.452 }, 00:10:51.452 "auth": { 00:10:51.452 "state": "completed", 00:10:51.452 "digest": "sha512", 00:10:51.452 "dhgroup": "ffdhe2048" 00:10:51.452 } 00:10:51.452 } 00:10:51.452 ]' 00:10:51.452 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.452 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:51.452 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.713 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:51.713 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.713 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.713 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.713 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.973 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:51.973 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:52.539 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.539 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:52.539 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.539 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.539 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.539 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.539 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:52.539 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.798 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:53.056 00:10:53.056 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.056 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.056 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.315 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.315 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.315 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.315 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.315 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.315 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.315 { 00:10:53.315 "cntlid": 109, 00:10:53.315 "qid": 0, 00:10:53.315 "state": "enabled", 00:10:53.315 "thread": "nvmf_tgt_poll_group_000", 00:10:53.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:53.315 "listen_address": { 00:10:53.315 "trtype": "TCP", 00:10:53.315 "adrfam": "IPv4", 00:10:53.315 "traddr": "10.0.0.3", 00:10:53.315 "trsvcid": "4420" 00:10:53.315 }, 00:10:53.315 "peer_address": { 00:10:53.315 "trtype": "TCP", 00:10:53.315 "adrfam": "IPv4", 00:10:53.315 "traddr": "10.0.0.1", 00:10:53.315 "trsvcid": "56550" 00:10:53.315 }, 00:10:53.315 "auth": { 00:10:53.315 "state": "completed", 00:10:53.315 "digest": "sha512", 00:10:53.315 "dhgroup": "ffdhe2048" 00:10:53.315 } 00:10:53.315 } 00:10:53.315 ]' 00:10:53.315 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.573 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:53.573 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.573 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:53.573 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.573 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.573 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.573 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.831 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:53.831 12:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:10:54.398 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:10:54.398 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:54.398 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.398 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.398 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.398 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.398 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:54.398 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:54.965 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:55.224 00:10:55.224 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.224 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.224 12:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.483 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.483 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.483 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.483 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.483 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.483 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.483 { 00:10:55.483 "cntlid": 111, 00:10:55.483 "qid": 0, 00:10:55.483 "state": "enabled", 00:10:55.483 "thread": "nvmf_tgt_poll_group_000", 00:10:55.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:55.483 "listen_address": { 00:10:55.483 "trtype": "TCP", 00:10:55.483 "adrfam": "IPv4", 00:10:55.483 "traddr": "10.0.0.3", 00:10:55.483 "trsvcid": "4420" 00:10:55.483 }, 00:10:55.483 "peer_address": { 00:10:55.483 "trtype": "TCP", 00:10:55.483 "adrfam": "IPv4", 00:10:55.483 "traddr": "10.0.0.1", 00:10:55.483 "trsvcid": "43732" 00:10:55.483 }, 00:10:55.483 "auth": { 00:10:55.483 "state": "completed", 00:10:55.483 "digest": "sha512", 00:10:55.483 "dhgroup": "ffdhe2048" 00:10:55.483 } 00:10:55.483 } 00:10:55.483 ]' 00:10:55.483 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.483 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:55.483 12:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.483 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:55.483 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.483 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.483 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.483 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.741 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:55.741 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:10:56.674 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.674 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:56.674 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.674 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.674 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.674 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:56.674 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.674 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:56.674 12:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.674 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.239 00:10:57.239 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.239 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:10:57.239 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.497 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.497 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.497 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.497 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.497 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.497 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.497 { 00:10:57.497 "cntlid": 113, 00:10:57.497 "qid": 0, 00:10:57.497 "state": "enabled", 00:10:57.497 "thread": "nvmf_tgt_poll_group_000", 00:10:57.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:57.497 "listen_address": { 00:10:57.497 "trtype": "TCP", 00:10:57.497 "adrfam": "IPv4", 00:10:57.497 "traddr": "10.0.0.3", 00:10:57.497 "trsvcid": "4420" 00:10:57.497 }, 00:10:57.497 "peer_address": { 00:10:57.497 "trtype": "TCP", 00:10:57.497 "adrfam": "IPv4", 00:10:57.497 "traddr": "10.0.0.1", 00:10:57.497 "trsvcid": "43762" 00:10:57.497 }, 00:10:57.497 "auth": { 00:10:57.497 "state": "completed", 00:10:57.497 "digest": "sha512", 00:10:57.497 "dhgroup": "ffdhe3072" 00:10:57.497 } 00:10:57.497 } 00:10:57.497 ]' 00:10:57.497 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.497 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:57.497 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.497 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:57.497 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.497 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.497 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.497 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.756 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:57.756 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret 
DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:10:58.324 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.325 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:10:58.325 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.325 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.325 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.325 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.325 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:58.325 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.640 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.897 00:10:59.157 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.157 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.157 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.157 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.157 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.157 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.157 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.157 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.157 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.157 { 00:10:59.157 "cntlid": 115, 00:10:59.157 "qid": 0, 00:10:59.157 "state": "enabled", 00:10:59.157 "thread": "nvmf_tgt_poll_group_000", 00:10:59.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:10:59.157 "listen_address": { 00:10:59.157 "trtype": "TCP", 00:10:59.157 "adrfam": "IPv4", 00:10:59.157 "traddr": "10.0.0.3", 00:10:59.157 "trsvcid": "4420" 00:10:59.157 }, 00:10:59.157 "peer_address": { 00:10:59.157 "trtype": "TCP", 00:10:59.157 "adrfam": "IPv4", 00:10:59.157 "traddr": "10.0.0.1", 00:10:59.157 "trsvcid": "43778" 00:10:59.157 }, 00:10:59.157 "auth": { 00:10:59.157 "state": "completed", 00:10:59.157 "digest": "sha512", 00:10:59.157 "dhgroup": "ffdhe3072" 00:10:59.157 } 00:10:59.157 } 00:10:59.157 ]' 00:10:59.157 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.416 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:59.416 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.416 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:59.416 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.416 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.416 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.416 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.675 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:10:59.675 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 
539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:11:00.244 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.244 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:00.244 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.244 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.244 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.244 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.244 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:00.244 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.503 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.763 00:11:00.763 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.763 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.763 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.022 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.022 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.022 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.022 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.022 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.022 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.022 { 00:11:01.022 "cntlid": 117, 00:11:01.022 "qid": 0, 00:11:01.022 "state": "enabled", 00:11:01.022 "thread": "nvmf_tgt_poll_group_000", 00:11:01.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:01.022 "listen_address": { 00:11:01.022 "trtype": "TCP", 00:11:01.022 "adrfam": "IPv4", 00:11:01.022 "traddr": "10.0.0.3", 00:11:01.022 "trsvcid": "4420" 00:11:01.022 }, 00:11:01.022 "peer_address": { 00:11:01.022 "trtype": "TCP", 00:11:01.022 "adrfam": "IPv4", 00:11:01.022 "traddr": "10.0.0.1", 00:11:01.022 "trsvcid": "43826" 00:11:01.022 }, 00:11:01.022 "auth": { 00:11:01.022 "state": "completed", 00:11:01.022 "digest": "sha512", 00:11:01.022 "dhgroup": "ffdhe3072" 00:11:01.022 } 00:11:01.022 } 00:11:01.022 ]' 00:11:01.022 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.282 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:01.282 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.282 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:01.282 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.282 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.282 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.282 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.541 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:11:01.541 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:11:02.109 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.109 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:02.109 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.109 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.109 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.109 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.109 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:02.109 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:02.369 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:02.628 00:11:02.887 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.887 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.887 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.146 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.146 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.146 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.146 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.146 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.146 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.146 { 00:11:03.146 "cntlid": 119, 00:11:03.146 "qid": 0, 00:11:03.146 "state": "enabled", 00:11:03.146 "thread": "nvmf_tgt_poll_group_000", 00:11:03.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:03.146 "listen_address": { 00:11:03.146 "trtype": "TCP", 00:11:03.146 "adrfam": "IPv4", 00:11:03.146 "traddr": "10.0.0.3", 00:11:03.147 "trsvcid": "4420" 00:11:03.147 }, 00:11:03.147 "peer_address": { 00:11:03.147 "trtype": "TCP", 00:11:03.147 "adrfam": "IPv4", 00:11:03.147 "traddr": "10.0.0.1", 00:11:03.147 "trsvcid": "43854" 00:11:03.147 }, 00:11:03.147 "auth": { 00:11:03.147 "state": "completed", 00:11:03.147 "digest": "sha512", 00:11:03.147 "dhgroup": "ffdhe3072" 00:11:03.147 } 00:11:03.147 } 00:11:03.147 ]' 00:11:03.147 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.147 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:03.147 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.147 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:03.147 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.147 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.147 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.147 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.406 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:03.406 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.343 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.603 00:11:04.603 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.603 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.603 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.862 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.862 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.862 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.862 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.862 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.862 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.862 { 00:11:04.862 "cntlid": 121, 00:11:04.862 "qid": 0, 00:11:04.862 "state": "enabled", 00:11:04.862 "thread": "nvmf_tgt_poll_group_000", 00:11:04.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:04.862 "listen_address": { 00:11:04.862 "trtype": "TCP", 00:11:04.862 "adrfam": "IPv4", 00:11:04.862 "traddr": "10.0.0.3", 00:11:04.862 "trsvcid": "4420" 00:11:04.862 }, 00:11:04.862 "peer_address": { 00:11:04.862 "trtype": "TCP", 00:11:04.862 "adrfam": "IPv4", 00:11:04.862 "traddr": "10.0.0.1", 00:11:04.862 "trsvcid": "56184" 00:11:04.862 }, 00:11:04.862 "auth": { 00:11:04.862 "state": "completed", 00:11:04.862 "digest": "sha512", 00:11:04.862 "dhgroup": "ffdhe4096" 00:11:04.862 } 00:11:04.862 } 00:11:04.862 ]' 00:11:04.862 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.121 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:05.121 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.121 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:05.121 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.121 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.121 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.121 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.380 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret 
DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:11:05.380 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:11:05.949 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.949 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:05.949 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.949 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.949 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.949 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.949 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:05.949 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.208 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.776 00:11:06.776 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.776 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.776 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.036 { 00:11:07.036 "cntlid": 123, 00:11:07.036 "qid": 0, 00:11:07.036 "state": "enabled", 00:11:07.036 "thread": "nvmf_tgt_poll_group_000", 00:11:07.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:07.036 "listen_address": { 00:11:07.036 "trtype": "TCP", 00:11:07.036 "adrfam": "IPv4", 00:11:07.036 "traddr": "10.0.0.3", 00:11:07.036 "trsvcid": "4420" 00:11:07.036 }, 00:11:07.036 "peer_address": { 00:11:07.036 "trtype": "TCP", 00:11:07.036 "adrfam": "IPv4", 00:11:07.036 "traddr": "10.0.0.1", 00:11:07.036 "trsvcid": "56208" 00:11:07.036 }, 00:11:07.036 "auth": { 00:11:07.036 "state": "completed", 00:11:07.036 "digest": "sha512", 00:11:07.036 "dhgroup": "ffdhe4096" 00:11:07.036 } 00:11:07.036 } 00:11:07.036 ]' 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.036 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.295 12:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:11:07.295 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:11:07.863 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.124 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:08.124 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.124 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.124 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.124 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.124 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:08.124 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:08.398 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:11:08.398 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.398 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:08.398 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:08.399 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:08.399 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.399 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.399 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.399 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.399 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.399 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.399 12:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.399 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.682 00:11:08.682 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.682 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.682 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.948 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.948 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.948 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.948 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.948 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.948 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.948 { 00:11:08.948 "cntlid": 125, 00:11:08.948 "qid": 0, 00:11:08.948 "state": "enabled", 00:11:08.948 "thread": "nvmf_tgt_poll_group_000", 00:11:08.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:08.948 "listen_address": { 00:11:08.948 "trtype": "TCP", 00:11:08.948 "adrfam": "IPv4", 00:11:08.948 "traddr": "10.0.0.3", 00:11:08.948 "trsvcid": "4420" 00:11:08.948 }, 00:11:08.948 "peer_address": { 00:11:08.948 "trtype": "TCP", 00:11:08.948 "adrfam": "IPv4", 00:11:08.948 "traddr": "10.0.0.1", 00:11:08.948 "trsvcid": "56228" 00:11:08.948 }, 00:11:08.948 "auth": { 00:11:08.948 "state": "completed", 00:11:08.948 "digest": "sha512", 00:11:08.948 "dhgroup": "ffdhe4096" 00:11:08.948 } 00:11:08.948 } 00:11:08.948 ]' 00:11:08.948 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.949 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:08.949 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.949 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:08.949 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.207 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.207 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.207 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.466 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:11:09.466 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:11:10.034 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.034 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:10.034 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.034 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.034 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.034 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.034 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:10.034 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:10.294 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:10.554 00:11:10.554 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.554 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.554 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.813 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.073 { 00:11:11.073 "cntlid": 127, 00:11:11.073 "qid": 0, 00:11:11.073 "state": "enabled", 00:11:11.073 "thread": "nvmf_tgt_poll_group_000", 00:11:11.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:11.073 "listen_address": { 00:11:11.073 "trtype": "TCP", 00:11:11.073 "adrfam": "IPv4", 00:11:11.073 "traddr": "10.0.0.3", 00:11:11.073 "trsvcid": "4420" 00:11:11.073 }, 00:11:11.073 "peer_address": { 00:11:11.073 "trtype": "TCP", 00:11:11.073 "adrfam": "IPv4", 00:11:11.073 "traddr": "10.0.0.1", 00:11:11.073 "trsvcid": "56246" 00:11:11.073 }, 00:11:11.073 "auth": { 00:11:11.073 "state": "completed", 00:11:11.073 "digest": "sha512", 00:11:11.073 "dhgroup": "ffdhe4096" 00:11:11.073 } 00:11:11.073 } 00:11:11.073 ]' 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.073 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.332 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:11.332 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:11.901 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.901 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:11.901 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.901 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.901 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.901 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:11.901 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.901 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:11.901 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.160 12:18:58 
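The trace above finishes the setup half of one connect_authenticate pass for the ffdhe6144 group: the host NQN is added to the subsystem together with a DH-CHAP key and controller key before the host-side controller is attached. A minimal sketch of that target-side RPC, using the key names from the trace (the keyring entries key0/ckey0 are created earlier in the script, outside this excerpt):

  # Target side: authorize the host NQN and bind its DH-CHAP key pair to the subsystem.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0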
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.160 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.729 00:11:12.729 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.729 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.729 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.988 { 00:11:12.988 "cntlid": 129, 00:11:12.988 "qid": 0, 00:11:12.988 "state": "enabled", 00:11:12.988 "thread": "nvmf_tgt_poll_group_000", 00:11:12.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:12.988 "listen_address": { 00:11:12.988 "trtype": "TCP", 00:11:12.988 "adrfam": "IPv4", 00:11:12.988 "traddr": "10.0.0.3", 00:11:12.988 "trsvcid": "4420" 00:11:12.988 }, 00:11:12.988 "peer_address": { 00:11:12.988 "trtype": "TCP", 00:11:12.988 "adrfam": "IPv4", 00:11:12.988 "traddr": "10.0.0.1", 00:11:12.988 "trsvcid": "56274" 00:11:12.988 }, 00:11:12.988 "auth": { 00:11:12.988 "state": "completed", 00:11:12.988 "digest": "sha512", 00:11:12.988 "dhgroup": "ffdhe6144" 00:11:12.988 } 00:11:12.988 } 00:11:12.988 ]' 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.988 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.557 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:11:13.557 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:11:14.124 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.124 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:14.124 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.124 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.124 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.124 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.124 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:14.124 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.383 12:19:00 
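Each iteration of the loop first restricts the host-side initiator to a single digest/DH-group combination, so the attach that follows can only succeed by negotiating exactly that pair (here sha512 with ffdhe6144). A sketch of that host-side call, addressed to the host application's RPC socket as in the trace:

  # Host side: limit DH-CHAP negotiation to one digest and one DH group for this pass.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144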
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.383 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.642 00:11:14.642 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.642 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.642 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.901 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.901 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.901 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.901 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.901 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.901 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.901 { 00:11:14.901 "cntlid": 131, 00:11:14.901 "qid": 0, 00:11:14.901 "state": "enabled", 00:11:14.901 "thread": "nvmf_tgt_poll_group_000", 00:11:14.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:14.901 "listen_address": { 00:11:14.901 "trtype": "TCP", 00:11:14.901 "adrfam": "IPv4", 00:11:14.901 "traddr": "10.0.0.3", 00:11:14.901 "trsvcid": "4420" 00:11:14.901 }, 00:11:14.901 "peer_address": { 00:11:14.901 "trtype": "TCP", 00:11:14.901 "adrfam": "IPv4", 00:11:14.901 "traddr": "10.0.0.1", 00:11:14.901 "trsvcid": "50722" 00:11:14.901 }, 00:11:14.901 "auth": { 00:11:14.901 "state": "completed", 00:11:14.901 "digest": "sha512", 00:11:14.901 "dhgroup": "ffdhe6144" 00:11:14.901 } 00:11:14.901 } 00:11:14.901 ]' 00:11:14.901 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.160 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:15.160 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.160 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:15.160 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:11:15.160 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.160 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.160 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.418 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:11:15.418 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:11:15.984 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.984 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:15.984 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.984 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.984 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.984 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.984 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:15.984 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.243 12:19:02 
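With the host now authorized for key2, the trace below attaches the controller from the host application and then inspects the resulting qpair on the target to confirm that the connection actually authenticated. A condensed sketch of the two sides, matching the calls in the trace:

  # Host side: attach with the same key pair the target expects for this host NQN.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Target side: the new qpair should report the auth state, digest and dhgroup for the pass.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0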
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.243 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.809 00:11:16.809 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.809 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.809 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.068 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.068 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.068 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.068 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.068 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.068 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.068 { 00:11:17.068 "cntlid": 133, 00:11:17.068 "qid": 0, 00:11:17.068 "state": "enabled", 00:11:17.068 "thread": "nvmf_tgt_poll_group_000", 00:11:17.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:17.068 "listen_address": { 00:11:17.068 "trtype": "TCP", 00:11:17.068 "adrfam": "IPv4", 00:11:17.068 "traddr": "10.0.0.3", 00:11:17.068 "trsvcid": "4420" 00:11:17.068 }, 00:11:17.068 "peer_address": { 00:11:17.068 "trtype": "TCP", 00:11:17.068 "adrfam": "IPv4", 00:11:17.068 "traddr": "10.0.0.1", 00:11:17.068 "trsvcid": "50756" 00:11:17.068 }, 00:11:17.068 "auth": { 00:11:17.068 "state": "completed", 00:11:17.068 "digest": "sha512", 00:11:17.068 "dhgroup": "ffdhe6144" 00:11:17.068 } 00:11:17.068 } 00:11:17.068 ]' 00:11:17.068 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.068 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:17.068 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.327 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:11:17.327 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.327 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.327 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.327 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.584 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:11:17.584 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:11:18.149 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.149 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:18.149 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.149 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.149 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.149 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.149 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:18.149 12:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.406 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.969 00:11:18.969 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.969 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.969 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.226 { 00:11:19.226 "cntlid": 135, 00:11:19.226 "qid": 0, 00:11:19.226 "state": "enabled", 00:11:19.226 "thread": "nvmf_tgt_poll_group_000", 00:11:19.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:19.226 "listen_address": { 00:11:19.226 "trtype": "TCP", 00:11:19.226 "adrfam": "IPv4", 00:11:19.226 "traddr": "10.0.0.3", 00:11:19.226 "trsvcid": "4420" 00:11:19.226 }, 00:11:19.226 "peer_address": { 00:11:19.226 "trtype": "TCP", 00:11:19.226 "adrfam": "IPv4", 00:11:19.226 "traddr": "10.0.0.1", 00:11:19.226 "trsvcid": "50794" 00:11:19.226 }, 00:11:19.226 "auth": { 00:11:19.226 "state": "completed", 00:11:19.226 "digest": "sha512", 00:11:19.226 "dhgroup": "ffdhe6144" 00:11:19.226 } 00:11:19.226 } 00:11:19.226 ]' 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.226 12:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.790 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:19.790 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:20.047 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.047 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:20.047 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.047 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.323 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.323 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.323 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.323 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:20.323 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:20.580 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:11:20.580 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.580 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:20.580 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:20.580 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:20.580 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.580 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.580 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.580 12:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.580 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.580 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.580 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.581 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.145 00:11:21.145 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.145 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.145 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.145 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.145 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.145 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.145 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.145 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.145 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.145 { 00:11:21.145 "cntlid": 137, 00:11:21.145 "qid": 0, 00:11:21.145 "state": "enabled", 00:11:21.145 "thread": "nvmf_tgt_poll_group_000", 00:11:21.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:21.145 "listen_address": { 00:11:21.145 "trtype": "TCP", 00:11:21.145 "adrfam": "IPv4", 00:11:21.145 "traddr": "10.0.0.3", 00:11:21.145 "trsvcid": "4420" 00:11:21.145 }, 00:11:21.145 "peer_address": { 00:11:21.145 "trtype": "TCP", 00:11:21.145 "adrfam": "IPv4", 00:11:21.145 "traddr": "10.0.0.1", 00:11:21.145 "trsvcid": "50804" 00:11:21.145 }, 00:11:21.145 "auth": { 00:11:21.145 "state": "completed", 00:11:21.145 "digest": "sha512", 00:11:21.145 "dhgroup": "ffdhe8192" 00:11:21.145 } 00:11:21.145 } 00:11:21.145 ]' 00:11:21.145 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.403 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:21.403 12:19:07 
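The qpair JSON above is what the assertions parse: the test pulls .auth.digest, .auth.dhgroup and .auth.state out of the nvmf_subsystem_get_qpairs output and compares them against the values configured for the pass. The same checks written as plain shell, mirroring the jq expressions in the trace:

  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]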
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.403 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:21.403 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.403 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.403 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.403 12:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.661 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:11:21.661 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:11:22.229 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.229 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:22.229 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.229 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.229 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.229 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.229 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:22.229 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:22.488 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:11:22.488 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.488 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:22.488 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:22.488 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:22.488 12:19:08 
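Besides the SPDK host application, each pass also exercises the kernel initiator: nvme-cli connects in-band with the raw DHHC-1 secrets (rather than keyring names) and is then disconnected again. A sketch of that pair of commands as run in the trace, with the secrets abbreviated here:

  # Kernel initiator: authenticate with the literal DHHC-1 secrets for this key index.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 \
      --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0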
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.488 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.488 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.488 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.488 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.488 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.488 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.488 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.056 00:11:23.056 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.056 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.056 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.315 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.315 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.315 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.315 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.315 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.315 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.315 { 00:11:23.315 "cntlid": 139, 00:11:23.315 "qid": 0, 00:11:23.315 "state": "enabled", 00:11:23.315 "thread": "nvmf_tgt_poll_group_000", 00:11:23.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:23.315 "listen_address": { 00:11:23.315 "trtype": "TCP", 00:11:23.315 "adrfam": "IPv4", 00:11:23.315 "traddr": "10.0.0.3", 00:11:23.315 "trsvcid": "4420" 00:11:23.315 }, 00:11:23.315 "peer_address": { 00:11:23.315 "trtype": "TCP", 00:11:23.315 "adrfam": "IPv4", 00:11:23.315 "traddr": "10.0.0.1", 00:11:23.315 "trsvcid": "50844" 00:11:23.315 }, 00:11:23.315 "auth": { 00:11:23.315 "state": "completed", 00:11:23.315 "digest": "sha512", 00:11:23.315 "dhgroup": "ffdhe8192" 00:11:23.315 } 00:11:23.315 } 00:11:23.315 ]' 00:11:23.315 12:19:09 
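After each verification the state is torn down so the next key index starts clean: the host detaches its controller and the target drops the host entry (and with it the key binding). A sketch of that teardown, matching the detach/remove calls that recur throughout the trace:

  # Host side: drop the authenticated controller.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # Target side: de-authorize the host NQN again.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05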
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.315 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:23.315 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.315 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:23.315 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.574 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.574 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.574 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.574 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:11:23.574 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: --dhchap-ctrl-secret DHHC-1:02:ZGUyYTk2NTZkYjVjMTkwYjQ5ODAwYjkwMzQ4YzNhZjA5N2M1NDY2Y2Y0NjE3YTY3+bSoPg==: 00:11:24.510 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.510 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:24.510 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.510 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.510 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.510 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.510 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:24.510 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.769 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.337 00:11:25.337 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.337 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.337 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.596 { 00:11:25.596 "cntlid": 141, 00:11:25.596 "qid": 0, 00:11:25.596 "state": "enabled", 00:11:25.596 "thread": "nvmf_tgt_poll_group_000", 00:11:25.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:25.596 "listen_address": { 00:11:25.596 "trtype": "TCP", 00:11:25.596 "adrfam": "IPv4", 00:11:25.596 "traddr": "10.0.0.3", 00:11:25.596 "trsvcid": "4420" 00:11:25.596 }, 00:11:25.596 "peer_address": { 00:11:25.596 "trtype": "TCP", 00:11:25.596 "adrfam": "IPv4", 00:11:25.596 "traddr": "10.0.0.1", 00:11:25.596 "trsvcid": "50284" 00:11:25.596 }, 00:11:25.596 "auth": { 00:11:25.596 "state": "completed", 00:11:25.596 "digest": 
"sha512", 00:11:25.596 "dhgroup": "ffdhe8192" 00:11:25.596 } 00:11:25.596 } 00:11:25.596 ]' 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.596 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.855 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:11:25.855 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:01:NmViZmZiMGZhMzY1MGI4MmIwM2JjZmNmYjdhNGY4ZDhz95cs: 00:11:26.421 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.680 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:26.680 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.680 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.680 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.680 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.680 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:26.680 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:26.940 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.507 00:11:27.507 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.507 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.508 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.766 { 00:11:27.766 "cntlid": 143, 00:11:27.766 "qid": 0, 00:11:27.766 "state": "enabled", 00:11:27.766 "thread": "nvmf_tgt_poll_group_000", 00:11:27.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:27.766 "listen_address": { 00:11:27.766 "trtype": "TCP", 00:11:27.766 "adrfam": "IPv4", 00:11:27.766 "traddr": "10.0.0.3", 00:11:27.766 "trsvcid": "4420" 00:11:27.766 }, 00:11:27.766 "peer_address": { 00:11:27.766 "trtype": "TCP", 00:11:27.766 "adrfam": "IPv4", 00:11:27.766 "traddr": "10.0.0.1", 00:11:27.766 "trsvcid": "50306" 00:11:27.766 }, 00:11:27.766 "auth": { 00:11:27.766 "state": "completed", 00:11:27.766 
"digest": "sha512", 00:11:27.766 "dhgroup": "ffdhe8192" 00:11:27.766 } 00:11:27.766 } 00:11:27.766 ]' 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.766 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.767 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.767 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.025 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:28.025 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:28.592 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.851 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.110 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.110 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.110 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.110 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.678 00:11:29.678 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.678 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.678 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.937 { 00:11:29.937 "cntlid": 145, 00:11:29.937 "qid": 0, 00:11:29.937 "state": "enabled", 00:11:29.937 "thread": "nvmf_tgt_poll_group_000", 00:11:29.937 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:29.937 "listen_address": { 00:11:29.937 "trtype": "TCP", 00:11:29.937 "adrfam": "IPv4", 00:11:29.937 "traddr": "10.0.0.3", 00:11:29.937 "trsvcid": "4420" 00:11:29.937 }, 00:11:29.937 "peer_address": { 00:11:29.937 "trtype": "TCP", 00:11:29.937 "adrfam": "IPv4", 00:11:29.937 "traddr": "10.0.0.1", 00:11:29.937 "trsvcid": "50322" 00:11:29.937 }, 00:11:29.937 "auth": { 00:11:29.937 "state": "completed", 00:11:29.937 "digest": "sha512", 00:11:29.937 "dhgroup": "ffdhe8192" 00:11:29.937 } 00:11:29.937 } 00:11:29.937 ]' 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.937 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.197 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:11:30.197 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:00:OGMyYjM4OTYxNTgyZjkxNzg1OTViOWM2Njk3YjdjZDM3NjRlZDlkY2NiYzg4MDJjfK5jYw==: --dhchap-ctrl-secret DHHC-1:03:ZDkwYjhkNTI4MGIwYjkwNGVhYzY2ZDExOGEzOWRmNTNjZmY5YzVmYzc0NjZiMDY4YTAzMjgxNWI5NzA5MWZiYSgp6T4=: 00:11:30.764 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 00:11:30.765 12:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:11:30.765 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:11:31.334 request: 00:11:31.334 { 00:11:31.334 "name": "nvme0", 00:11:31.334 "trtype": "tcp", 00:11:31.334 "traddr": "10.0.0.3", 00:11:31.334 "adrfam": "ipv4", 00:11:31.334 "trsvcid": "4420", 00:11:31.334 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:31.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:31.334 "prchk_reftag": false, 00:11:31.334 "prchk_guard": false, 00:11:31.334 "hdgst": false, 00:11:31.334 "ddgst": false, 00:11:31.334 "dhchap_key": "key2", 00:11:31.334 "allow_unrecognized_csi": false, 00:11:31.334 "method": "bdev_nvme_attach_controller", 00:11:31.334 "req_id": 1 00:11:31.334 } 00:11:31.334 Got JSON-RPC error response 00:11:31.334 response: 00:11:31.334 { 00:11:31.334 "code": -5, 00:11:31.334 "message": "Input/output error" 00:11:31.334 } 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:31.334 
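The trace above is a negative case: the host entry on the subsystem was added with --dhchap-key key1 only, so an attach that presents key2 is expected to fail DH-HMAC-CHAP authentication, and bdev_nvme_attach_controller returns JSON-RPC error -5 (Input/output error); the NOT wrapper counts that failure as a pass. A condensed sketch of the same sequence, using the rpc.py paths from this run ($rpc, $subnqn and $hostnqn are shorthand introduced here, not variables from auth.sh):

# Shorthand for the identifiers that appear throughout this run (editorial, not from the script)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05

# Target side (default /var/tmp/spdk.sock): allow this host, but only with key1
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1

# Host side (/var/tmp/host.sock): presenting key2 must be rejected with -5 (Input/output error)
if $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2; then
  echo "unexpected: authentication should have failed" >&2
fi

# Drop the host entry before the next case
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"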
12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:31.334 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:31.593 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.593 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:31.593 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.593 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:31.593 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:31.593 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:32.161 request: 00:11:32.161 { 00:11:32.161 "name": "nvme0", 00:11:32.161 "trtype": "tcp", 00:11:32.161 "traddr": "10.0.0.3", 00:11:32.161 "adrfam": "ipv4", 00:11:32.161 "trsvcid": "4420", 00:11:32.161 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:32.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:32.161 "prchk_reftag": false, 00:11:32.161 "prchk_guard": false, 00:11:32.161 "hdgst": false, 00:11:32.161 "ddgst": false, 00:11:32.161 "dhchap_key": "key1", 00:11:32.161 "dhchap_ctrlr_key": "ckey2", 00:11:32.161 "allow_unrecognized_csi": false, 00:11:32.161 "method": "bdev_nvme_attach_controller", 00:11:32.161 "req_id": 1 00:11:32.161 } 00:11:32.161 Got JSON-RPC error response 00:11:32.161 response: 00:11:32.161 { 
00:11:32.161 "code": -5, 00:11:32.161 "message": "Input/output error" 00:11:32.161 } 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:32.161 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.162 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.162 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.162 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.420 
request: 00:11:32.420 { 00:11:32.420 "name": "nvme0", 00:11:32.420 "trtype": "tcp", 00:11:32.421 "traddr": "10.0.0.3", 00:11:32.421 "adrfam": "ipv4", 00:11:32.421 "trsvcid": "4420", 00:11:32.421 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:32.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:32.421 "prchk_reftag": false, 00:11:32.421 "prchk_guard": false, 00:11:32.421 "hdgst": false, 00:11:32.421 "ddgst": false, 00:11:32.421 "dhchap_key": "key1", 00:11:32.421 "dhchap_ctrlr_key": "ckey1", 00:11:32.421 "allow_unrecognized_csi": false, 00:11:32.421 "method": "bdev_nvme_attach_controller", 00:11:32.421 "req_id": 1 00:11:32.421 } 00:11:32.421 Got JSON-RPC error response 00:11:32.421 response: 00:11:32.421 { 00:11:32.421 "code": -5, 00:11:32.421 "message": "Input/output error" 00:11:32.421 } 00:11:32.421 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:32.421 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:32.421 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:32.421 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:32.421 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:32.421 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.421 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 66807 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 66807 ']' 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 66807 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66807 00:11:32.680 killing process with pid 66807 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66807' 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 66807 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 66807 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.680 12:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=69767 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 69767 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69767 ']' 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.680 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:32.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 69767 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69767 ']' 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
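Here the first target (pid 66807) is killed and nvmf_tgt is restarted in the nvmf_tgt_ns_spdk namespace with --wait-for-rpc and -L nvmf_auth, so the remaining cases get DH-HMAC-CHAP debug logging and load their secrets through keyring RPCs. A minimal sketch of that restart, reusing the binary and flags echoed above; the polling loop stands in for the script's waitforlisten helper and is an assumption, not its actual implementation:

# Fresh target in the test netns; -L nvmf_auth enables auth-layer debug logging,
# --wait-for-rpc keeps the app idle until framework initialization is requested over RPC
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Wait for the RPC socket to answer before configuring keys and subsystems (simplified stand-in)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done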
00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.940 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.199 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.199 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:33.199 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:11:33.199 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.199 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.459 null0 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.WOA 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.qId ]] 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qId 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NpK 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.no7 ]] 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.no7 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:33.459 12:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kR0 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.F2Q ]] 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.F2Q 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.459 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.vuv 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
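After the restart, each secret is registered with the target through the keyring (keyring_file_add_key keyN/ckeyN pointing at the /tmp/spdk.key-* files), and the sha512/ffdhe8192 case is repeated against key3. A condensed version of that loop, assuming keys[] and ckeys[] hold the same file paths shown in the trace:

# keys[i]/ckeys[i] are assumed to hold the /tmp/spdk.key-* paths generated earlier in the run
for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
  # controller (bidirectional) keys are optional; register only the ones that exist
  [[ -n ${ckeys[$i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
done

# Re-run the sha512/ffdhe8192 case: allow the host with key3, then attach from the host side
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3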
00:11:33.459 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:34.397 nvme0n1 00:11:34.397 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.397 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.397 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.659 { 00:11:34.659 "cntlid": 1, 00:11:34.659 "qid": 0, 00:11:34.659 "state": "enabled", 00:11:34.659 "thread": "nvmf_tgt_poll_group_000", 00:11:34.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:34.659 "listen_address": { 00:11:34.659 "trtype": "TCP", 00:11:34.659 "adrfam": "IPv4", 00:11:34.659 "traddr": "10.0.0.3", 00:11:34.659 "trsvcid": "4420" 00:11:34.659 }, 00:11:34.659 "peer_address": { 00:11:34.659 "trtype": "TCP", 00:11:34.659 "adrfam": "IPv4", 00:11:34.659 "traddr": "10.0.0.1", 00:11:34.659 "trsvcid": "34232" 00:11:34.659 }, 00:11:34.659 "auth": { 00:11:34.659 "state": "completed", 00:11:34.659 "digest": "sha512", 00:11:34.659 "dhgroup": "ffdhe8192" 00:11:34.659 } 00:11:34.659 } 00:11:34.659 ]' 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.659 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.967 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:34.967 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key3 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.929 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:36.189 request: 00:11:36.189 { 00:11:36.189 "name": "nvme0", 00:11:36.189 "trtype": "tcp", 00:11:36.189 "traddr": "10.0.0.3", 00:11:36.189 "adrfam": "ipv4", 00:11:36.189 "trsvcid": "4420", 00:11:36.189 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:36.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:36.189 "prchk_reftag": false, 00:11:36.189 "prchk_guard": false, 00:11:36.189 "hdgst": false, 00:11:36.189 "ddgst": false, 00:11:36.189 "dhchap_key": "key3", 00:11:36.189 "allow_unrecognized_csi": false, 00:11:36.189 "method": "bdev_nvme_attach_controller", 00:11:36.189 "req_id": 1 00:11:36.189 } 00:11:36.189 Got JSON-RPC error response 00:11:36.189 response: 00:11:36.189 { 00:11:36.189 "code": -5, 00:11:36.189 "message": "Input/output error" 00:11:36.189 } 00:11:36.189 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:36.189 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.189 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.189 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.189 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:11:36.189 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:11:36.189 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:36.189 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:36.757 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:11:36.757 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:36.757 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:11:36.757 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:36.757 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.757 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:36.757 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.757 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:36.757 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:36.757 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:36.757 request: 00:11:36.757 { 00:11:36.757 "name": "nvme0", 00:11:36.757 "trtype": "tcp", 00:11:36.757 "traddr": "10.0.0.3", 00:11:36.757 "adrfam": "ipv4", 00:11:36.757 "trsvcid": "4420", 00:11:36.757 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:36.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:36.757 "prchk_reftag": false, 00:11:36.757 "prchk_guard": false, 00:11:36.757 "hdgst": false, 00:11:36.757 "ddgst": false, 00:11:36.757 "dhchap_key": "key3", 00:11:36.757 "allow_unrecognized_csi": false, 00:11:36.757 "method": "bdev_nvme_attach_controller", 00:11:36.757 "req_id": 1 00:11:36.757 } 00:11:36.757 Got JSON-RPC error response 00:11:36.757 response: 00:11:36.757 { 00:11:36.757 "code": -5, 00:11:36.757 "message": "Input/output error" 00:11:36.757 } 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.016 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.275 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.275 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:37.276 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:37.276 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:37.276 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:37.276 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:37.276 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:37.276 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:37.276 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:37.276 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:37.276 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:37.535 request: 00:11:37.535 { 00:11:37.535 "name": "nvme0", 00:11:37.535 "trtype": "tcp", 00:11:37.535 "traddr": "10.0.0.3", 00:11:37.535 "adrfam": "ipv4", 00:11:37.535 "trsvcid": "4420", 00:11:37.535 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:37.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:37.535 "prchk_reftag": false, 00:11:37.535 "prchk_guard": false, 00:11:37.535 "hdgst": false, 00:11:37.535 "ddgst": false, 00:11:37.535 "dhchap_key": "key0", 00:11:37.535 "dhchap_ctrlr_key": "key1", 00:11:37.535 "allow_unrecognized_csi": false, 00:11:37.535 "method": "bdev_nvme_attach_controller", 00:11:37.535 "req_id": 1 00:11:37.535 } 00:11:37.535 Got JSON-RPC error response 00:11:37.535 response: 00:11:37.535 { 00:11:37.535 "code": -5, 00:11:37.535 "message": "Input/output error" 00:11:37.535 } 00:11:37.535 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:37.535 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:37.535 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:37.535 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:11:37.535 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:11:37.535 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:37.535 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:37.795 nvme0n1 00:11:37.795 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:11:37.795 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:11:37.795 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.365 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.365 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.365 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.623 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 00:11:38.623 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.623 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.623 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.623 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:38.623 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:38.623 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:39.560 nvme0n1 00:11:39.560 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:11:39.560 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:11:39.560 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.560 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.560 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:39.560 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.560 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.818 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.818 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:11:39.818 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:11:39.818 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.818 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.818 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:39.818 12:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid 539e2455-b2a8-46ce-bfce-40a317783b05 -l 0 --dhchap-secret DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: --dhchap-ctrl-secret DHHC-1:03:MWQ2ODdiM2ViY2FjMDRjYjE0NDQ0OWI3Mzc0YzRmYTExZTkwYzA0YzIwZDkyN2NiNTVmNTRiZmVkMDExZDJlNoLnbpU=: 00:11:40.754 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:11:40.754 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:11:40.754 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:11:40.754 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:11:40.754 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:11:40.754 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:11:40.754 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:11:40.754 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.754 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.011 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:11:41.011 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:41.011 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:11:41.011 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:41.011 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.011 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:41.011 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.011 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:41.012 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:41.012 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:41.576 request: 00:11:41.576 { 00:11:41.576 "name": "nvme0", 00:11:41.576 "trtype": "tcp", 00:11:41.576 "traddr": "10.0.0.3", 00:11:41.576 "adrfam": "ipv4", 00:11:41.576 "trsvcid": "4420", 00:11:41.576 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:41.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05", 00:11:41.576 "prchk_reftag": false, 00:11:41.576 "prchk_guard": false, 00:11:41.576 "hdgst": false, 00:11:41.576 "ddgst": false, 00:11:41.576 "dhchap_key": "key1", 00:11:41.576 "allow_unrecognized_csi": false, 00:11:41.576 "method": "bdev_nvme_attach_controller", 00:11:41.576 "req_id": 1 00:11:41.576 } 00:11:41.576 Got JSON-RPC error response 00:11:41.576 response: 00:11:41.576 { 00:11:41.576 "code": -5, 00:11:41.576 "message": "Input/output error" 00:11:41.576 } 00:11:41.576 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:41.576 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:41.576 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:41.576 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:41.576 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:41.576 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:41.576 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:42.509 nvme0n1 00:11:42.509 
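This block exercises key rotation: nvmf_subsystem_set_keys switches the host entry to key2 (host key) and key3 (controller key), after which an attach with the retired key1 fails with -5 while key2 plus controller key key3 authenticates and nvme0n1 appears. The equivalent sequence in condensed form, reusing the rpc_cmd/hostrpc helpers from the trace and the same $subnqn/$hostnqn shorthand as above:

# Rotate the target-side host entry to key2 (host) / key3 (controller)
rpc_cmd nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key key3

# The retired key1 is now rejected (attach returns -5, Input/output error)...
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 || true

# ...while the new key pair authenticates and exposes nvme0n1
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3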
12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:11:42.509 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:11:42.509 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.509 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.509 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.509 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.075 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:43.075 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.075 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.075 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.075 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:11:43.075 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:43.075 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:43.075 nvme0n1 00:11:43.333 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:11:43.333 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:11:43.333 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.592 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.592 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.592 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.849 12:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: '' 2s 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: ]] 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZWE3M2E3YjZhMGM3Y2Q4YTM4MmI1NWFkOWEwYWNkYzRcvYC7: 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:43.849 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key1 --dhchap-ctrlr-key key2 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: 2s 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:11:45.751 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: ]] 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmZkNjg5YzRlNWQ2ODViZmE4NDg0MDI3M2MxZGIzNWE1YmJkZjBjMGMyMGY3N2Ezst8DsQ==: 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:45.751 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:48.288 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:48.855 nvme0n1 00:11:48.855 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:48.855 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.855 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.855 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.855 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:48.855 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:49.423 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:11:49.423 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.423 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:11:49.682 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.682 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:49.682 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.682 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.682 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.682 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:11:49.682 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:11:49.941 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:11:49.941 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:11:49.941 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:50.510 12:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:50.510 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:50.768 request: 00:11:50.768 { 00:11:50.768 "name": "nvme0", 00:11:50.768 "dhchap_key": "key1", 00:11:50.768 "dhchap_ctrlr_key": "key3", 00:11:50.768 "method": "bdev_nvme_set_keys", 00:11:50.768 "req_id": 1 00:11:50.768 } 00:11:50.768 Got JSON-RPC error response 00:11:50.768 response: 00:11:50.768 { 00:11:50.768 "code": -13, 00:11:50.768 "message": "Permission denied" 00:11:50.768 } 00:11:50.768 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:50.768 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.768 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.768 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.768 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:11:50.768 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.768 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:11:51.335 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:11:51.335 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:11:52.272 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:11:52.272 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.273 12:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:11:52.533 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:11:52.533 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:52.533 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.533 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.533 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.533 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:52.533 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:52.533 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:53.470 nvme0n1 00:11:53.470 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:53.470 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.471 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.471 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.471 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:53.471 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:53.471 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:53.471 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:11:53.471 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.471 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:11:53.471 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.471 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:11:53.471 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:53.729 request: 00:11:53.729 { 00:11:53.729 "name": "nvme0", 00:11:53.729 "dhchap_key": "key2", 00:11:53.729 "dhchap_ctrlr_key": "key0", 00:11:53.730 "method": "bdev_nvme_set_keys", 00:11:53.730 "req_id": 1 00:11:53.730 } 00:11:53.730 Got JSON-RPC error response 00:11:53.730 response: 00:11:53.730 { 00:11:53.730 "code": -13, 00:11:53.730 "message": "Permission denied" 00:11:53.730 } 00:11:53.730 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:53.730 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:53.730 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:53.730 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:53.989 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:11:53.989 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:11:53.989 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.989 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:11:53.989 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:11:55.366 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:11:55.366 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:11:55.366 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.366 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 66827 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 66827 ']' 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 66827 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66827 00:11:55.367 killing process with pid 66827 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:55.367 12:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66827' 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 66827 00:11:55.367 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 66827 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:55.626 rmmod nvme_tcp 00:11:55.626 rmmod nvme_fabrics 00:11:55.626 rmmod nvme_keyring 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 69767 ']' 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 69767 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 69767 ']' 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 69767 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69767 00:11:55.626 killing process with pid 69767 00:11:55.626 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.627 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.627 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69767' 00:11:55.627 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 69767 00:11:55.627 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 69767 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
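The key-rotation exercise traced above pairs nvmf_subsystem_set_keys on the target with either bdev_nvme_set_keys on an already attached host controller or a fresh bdev_nvme_attach_controller, and the two NOT cases confirm that presenting keys the subsystem no longer holds is rejected with JSON-RPC error -13 (Permission denied). A condensed, hand-written sketch of that flow, with the socket path, NQNs, and key names copied from the log rather than quoted from auth.sh itself:

# Target side: rotate the subsystem to key2 (host key) / key3 (controller key) for this host NQN
scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: re-authenticate the attached controller with the matching keys
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side with stale keys: expected to fail with -13 "Permission denied", as in the trace above
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key key3
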
00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:55.886 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.WOA /tmp/spdk.key-sha256.NpK /tmp/spdk.key-sha384.kR0 /tmp/spdk.key-sha512.vuv /tmp/spdk.key-sha512.qId /tmp/spdk.key-sha384.no7 /tmp/spdk.key-sha256.F2Q '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:11:56.146 00:11:56.146 real 2m57.802s 00:11:56.146 user 7m7.194s 00:11:56.146 sys 0m26.854s 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.146 ************************************ 00:11:56.146 END TEST nvmf_auth_target 
00:11:56.146 ************************************ 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.146 ************************************ 00:11:56.146 START TEST nvmf_bdevio_no_huge 00:11:56.146 ************************************ 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:56.146 * Looking for test storage... 00:11:56.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:11:56.146 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:56.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.406 --rc genhtml_branch_coverage=1 00:11:56.406 --rc genhtml_function_coverage=1 00:11:56.406 --rc genhtml_legend=1 00:11:56.406 --rc geninfo_all_blocks=1 00:11:56.406 --rc geninfo_unexecuted_blocks=1 00:11:56.406 00:11:56.406 ' 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:56.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.406 --rc genhtml_branch_coverage=1 00:11:56.406 --rc genhtml_function_coverage=1 00:11:56.406 --rc genhtml_legend=1 00:11:56.406 --rc geninfo_all_blocks=1 00:11:56.406 --rc geninfo_unexecuted_blocks=1 00:11:56.406 00:11:56.406 ' 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:56.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.406 --rc genhtml_branch_coverage=1 00:11:56.406 --rc genhtml_function_coverage=1 00:11:56.406 --rc genhtml_legend=1 00:11:56.406 --rc geninfo_all_blocks=1 00:11:56.406 --rc geninfo_unexecuted_blocks=1 00:11:56.406 00:11:56.406 ' 00:11:56.406 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:56.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.407 --rc genhtml_branch_coverage=1 00:11:56.407 --rc genhtml_function_coverage=1 00:11:56.407 --rc genhtml_legend=1 00:11:56.407 --rc geninfo_all_blocks=1 00:11:56.407 --rc geninfo_unexecuted_blocks=1 00:11:56.407 00:11:56.407 ' 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:56.407 
12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.407 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:56.407 
12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:56.407 Cannot find device "nvmf_init_br" 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:56.407 Cannot find device "nvmf_init_br2" 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:56.407 Cannot find device "nvmf_tgt_br" 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:56.407 Cannot find device "nvmf_tgt_br2" 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:56.407 Cannot find device "nvmf_init_br" 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:11:56.407 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:56.407 Cannot find device "nvmf_init_br2" 00:11:56.407 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:11:56.408 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:56.408 Cannot find device "nvmf_tgt_br" 00:11:56.408 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:11:56.408 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:56.408 Cannot find device "nvmf_tgt_br2" 00:11:56.408 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:11:56.408 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:56.408 Cannot find device "nvmf_br" 00:11:56.408 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:11:56.408 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:56.408 Cannot find device "nvmf_init_if" 00:11:56.408 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:11:56.408 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:56.667 Cannot find device "nvmf_init_if2" 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:11:56.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:56.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:56.667 12:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:56.667 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:56.927 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:56.927 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:11:56.927 00:11:56.927 --- 10.0.0.3 ping statistics --- 00:11:56.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.927 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:56.927 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:56.927 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:11:56.927 00:11:56.927 --- 10.0.0.4 ping statistics --- 00:11:56.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.927 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:56.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:56.927 00:11:56.927 --- 10.0.0.1 ping statistics --- 00:11:56.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.927 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:56.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:56.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:11:56.927 00:11:56.927 --- 10.0.0.2 ping statistics --- 00:11:56.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.927 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70396 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70396 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70396 ']' 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.927 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:56.927 [2024-12-06 12:19:43.442614] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
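The commands traced above wire up the test network before the target is launched: veth pairs whose target-side ends are moved into the nvmf_tgt_ns_spdk namespace, addresses in 10.0.0.0/24, and a bridge joining the host-side ends, all verified by the four pings. A condensed sketch of that topology (names and addresses are taken from the log; the trace also creates a second init/tgt pair the same way):

```bash
# One veth pair per side: the target end lives inside the namespace the nvmf_tgt
# process will later run in, the *_br ends are enslaved to the nvmf_br bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3    # host side reaching the in-namespace target address
```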
00:11:56.927 [2024-12-06 12:19:43.442724] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:57.186 [2024-12-06 12:19:43.605273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.186 [2024-12-06 12:19:43.676755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.186 [2024-12-06 12:19:43.676820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.186 [2024-12-06 12:19:43.676834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.186 [2024-12-06 12:19:43.676845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.186 [2024-12-06 12:19:43.676854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.186 [2024-12-06 12:19:43.677769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:57.186 [2024-12-06 12:19:43.677911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:57.186 [2024-12-06 12:19:43.678031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:57.186 [2024-12-06 12:19:43.678550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.186 [2024-12-06 12:19:43.684437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:57.755 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.755 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:11:57.755 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:57.755 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:57.755 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:58.016 [2024-12-06 12:19:44.439152] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:58.016 Malloc0 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.016 12:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:58.016 [2024-12-06 12:19:44.480925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:58.016 { 00:11:58.016 "params": { 00:11:58.016 "name": "Nvme$subsystem", 00:11:58.016 "trtype": "$TEST_TRANSPORT", 00:11:58.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:58.016 "adrfam": "ipv4", 00:11:58.016 "trsvcid": "$NVMF_PORT", 00:11:58.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:58.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:58.016 "hdgst": ${hdgst:-false}, 00:11:58.016 "ddgst": ${ddgst:-false} 00:11:58.016 }, 00:11:58.016 "method": "bdev_nvme_attach_controller" 00:11:58.016 } 00:11:58.016 EOF 00:11:58.016 )") 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
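Condensed, the target provisioning that the bdevio test drives over /var/tmp/spdk.sock is the following sequence; rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py, so the same steps can be written as plain rpc.py calls (flags copied from the trace):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB, 512-byte blocks, backing the Nvme1n1 device bdevio reports below
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
```

bdevio is then pointed at that listener through the generated JSON on /dev/fd/62 rather than through a kernel initiator.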
00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:11:58.016 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:58.016 "params": { 00:11:58.016 "name": "Nvme1", 00:11:58.016 "trtype": "tcp", 00:11:58.016 "traddr": "10.0.0.3", 00:11:58.016 "adrfam": "ipv4", 00:11:58.016 "trsvcid": "4420", 00:11:58.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:58.016 "hdgst": false, 00:11:58.016 "ddgst": false 00:11:58.016 }, 00:11:58.016 "method": "bdev_nvme_attach_controller" 00:11:58.016 }' 00:11:58.016 [2024-12-06 12:19:44.543641] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:11:58.016 [2024-12-06 12:19:44.543744] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70432 ] 00:11:58.276 [2024-12-06 12:19:44.701887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:58.276 [2024-12-06 12:19:44.756209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.276 [2024-12-06 12:19:44.756314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.276 [2024-12-06 12:19:44.756321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.276 [2024-12-06 12:19:44.769131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:58.535 I/O targets: 00:11:58.535 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:58.535 00:11:58.535 00:11:58.535 CUnit - A unit testing framework for C - Version 2.1-3 00:11:58.535 http://cunit.sourceforge.net/ 00:11:58.535 00:11:58.535 00:11:58.535 Suite: bdevio tests on: Nvme1n1 00:11:58.535 Test: blockdev write read block ...passed 00:11:58.535 Test: blockdev write zeroes read block ...passed 00:11:58.535 Test: blockdev write zeroes read no split ...passed 00:11:58.535 Test: blockdev write zeroes read split ...passed 00:11:58.535 Test: blockdev write zeroes read split partial ...passed 00:11:58.535 Test: blockdev reset ...[2024-12-06 12:19:44.980243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:58.535 [2024-12-06 12:19:44.980541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b1e90 (9): Bad file descriptor 00:11:58.535 [2024-12-06 12:19:45.000360] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:58.535 passed 00:11:58.535 Test: blockdev write read 8 blocks ...passed 00:11:58.535 Test: blockdev write read size > 128k ...passed 00:11:58.535 Test: blockdev write read invalid size ...passed 00:11:58.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:58.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:58.535 Test: blockdev write read max offset ...passed 00:11:58.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:58.535 Test: blockdev writev readv 8 blocks ...passed 00:11:58.535 Test: blockdev writev readv 30 x 1block ...passed 00:11:58.535 Test: blockdev writev readv block ...passed 00:11:58.535 Test: blockdev writev readv size > 128k ...passed 00:11:58.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:58.535 Test: blockdev comparev and writev ...[2024-12-06 12:19:45.008359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.535 [2024-12-06 12:19:45.008418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:58.536 [2024-12-06 12:19:45.008439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.536 [2024-12-06 12:19:45.008449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:58.536 passed 00:11:58.536 Test: blockdev nvme passthru rw ...[2024-12-06 12:19:45.008983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.536 [2024-12-06 12:19:45.009022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:58.536 [2024-12-06 12:19:45.009038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.536 [2024-12-06 12:19:45.009047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:58.536 [2024-12-06 12:19:45.009357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.536 [2024-12-06 12:19:45.009376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:58.536 [2024-12-06 12:19:45.009392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.536 [2024-12-06 12:19:45.009402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:58.536 [2024-12-06 12:19:45.009676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.536 [2024-12-06 12:19:45.009693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:58.536 [2024-12-06 12:19:45.009708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:58.536 [2024-12-06 12:19:45.009718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:58.536 passed 00:11:58.536 Test: blockdev nvme passthru vendor specific ...[2024-12-06 12:19:45.010538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:58.536 [2024-12-06 12:19:45.010564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:58.536 [2024-12-06 12:19:45.010681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:58.536 [2024-12-06 12:19:45.010698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:58.536 [2024-12-06 12:19:45.010810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:58.536 [2024-12-06 12:19:45.010826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:58.536 [2024-12-06 12:19:45.010920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:58.536 [2024-12-06 12:19:45.010936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:58.536 passed 00:11:58.536 Test: blockdev nvme admin passthru ...passed 00:11:58.536 Test: blockdev copy ...passed 00:11:58.536 00:11:58.536 Run Summary: Type Total Ran Passed Failed Inactive 00:11:58.536 suites 1 1 n/a 0 0 00:11:58.536 tests 23 23 23 0 0 00:11:58.536 asserts 152 152 152 0 n/a 00:11:58.536 00:11:58.536 Elapsed time = 0.181 seconds 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.794 rmmod nvme_tcp 00:11:58.794 rmmod nvme_fabrics 00:11:58.794 rmmod nvme_keyring 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70396 ']' 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70396 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70396 ']' 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70396 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70396 00:11:58.794 killing process with pid 70396 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70396' 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70396 00:11:58.794 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70396 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:59.360 12:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.360 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.360 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:11:59.360 00:11:59.360 real 0m3.341s 00:11:59.360 user 0m9.664s 00:11:59.360 sys 0m1.240s 00:11:59.360 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.360 ************************************ 00:11:59.360 END TEST nvmf_bdevio_no_huge 00:11:59.360 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:59.360 ************************************ 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.620 ************************************ 00:11:59.620 START TEST nvmf_tls 00:11:59.620 ************************************ 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:59.620 * Looking for test storage... 
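One detail worth calling out from the teardown above: firewall state is cleaned up by pattern rather than rule by rule. Both helpers live in nvmf/common.sh; reconstructed from how they expand in the trace (treat the function bodies as an assumption), the scheme is:

```bash
# Every rule the tests add carries an SPDK_NVMF comment...
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
# ...so teardown can drop all of them in one pass, leaving unrelated rules alone.
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # tagged on insertion
iptr                                                            # swept on teardown
```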
00:11:59.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:59.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.620 --rc genhtml_branch_coverage=1 00:11:59.620 --rc genhtml_function_coverage=1 00:11:59.620 --rc genhtml_legend=1 00:11:59.620 --rc geninfo_all_blocks=1 00:11:59.620 --rc geninfo_unexecuted_blocks=1 00:11:59.620 00:11:59.620 ' 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:59.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.620 --rc genhtml_branch_coverage=1 00:11:59.620 --rc genhtml_function_coverage=1 00:11:59.620 --rc genhtml_legend=1 00:11:59.620 --rc geninfo_all_blocks=1 00:11:59.620 --rc geninfo_unexecuted_blocks=1 00:11:59.620 00:11:59.620 ' 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:59.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.620 --rc genhtml_branch_coverage=1 00:11:59.620 --rc genhtml_function_coverage=1 00:11:59.620 --rc genhtml_legend=1 00:11:59.620 --rc geninfo_all_blocks=1 00:11:59.620 --rc geninfo_unexecuted_blocks=1 00:11:59.620 00:11:59.620 ' 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:59.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.620 --rc genhtml_branch_coverage=1 00:11:59.620 --rc genhtml_function_coverage=1 00:11:59.620 --rc genhtml_legend=1 00:11:59.620 --rc geninfo_all_blocks=1 00:11:59.620 --rc geninfo_unexecuted_blocks=1 00:11:59.620 00:11:59.620 ' 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.620 12:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.620 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.621 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.621 
12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:59.621 Cannot find device "nvmf_init_br" 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:11:59.621 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:59.880 Cannot find device "nvmf_init_br2" 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:59.880 Cannot find device "nvmf_tgt_br" 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.880 Cannot find device "nvmf_tgt_br2" 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:59.880 Cannot find device "nvmf_init_br" 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:59.880 Cannot find device "nvmf_init_br2" 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:59.880 Cannot find device "nvmf_tgt_br" 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:59.880 Cannot find device "nvmf_tgt_br2" 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:59.880 Cannot find device "nvmf_br" 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:59.880 Cannot find device "nvmf_init_if" 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:59.880 Cannot find device "nvmf_init_if2" 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:59.880 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:59.881 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:59.881 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:59.881 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:59.881 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:59.881 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:59.881 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:59.881 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:00.140 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:00.140 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:00.140 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:00.140 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:00.140 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:00.140 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:00.140 12:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:00.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:00.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:12:00.141 00:12:00.141 --- 10.0.0.3 ping statistics --- 00:12:00.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.141 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:00.141 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:00.141 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:12:00.141 00:12:00.141 --- 10.0.0.4 ping statistics --- 00:12:00.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.141 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:00.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:00.141 00:12:00.141 --- 10.0.0.1 ping statistics --- 00:12:00.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.141 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:00.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:00.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:12:00.141 00:12:00.141 --- 10.0.0.2 ping statistics --- 00:12:00.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.141 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70662 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70662 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70662 ']' 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.141 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:00.141 [2024-12-06 12:19:46.694075] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
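Unlike the bdevio run, this target is launched with --wait-for-rpc: the TLS test needs to switch the socket layer to the ssl implementation and set its options before the app finishes initializing, which is the sequence the following trace lines show. A minimal sketch of that handshake (the wait loop is a crude stand-in for the harness's waitforlisten helper):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 --wait-for-rpc &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket to appear
$rpc sock_set_default_impl -i ssl                       # select the TLS-capable socket implementation
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init                               # resume startup with the options in place
```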
00:12:00.141 [2024-12-06 12:19:46.694151] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.400 [2024-12-06 12:19:46.848032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.400 [2024-12-06 12:19:46.887116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.400 [2024-12-06 12:19:46.887194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.400 [2024-12-06 12:19:46.887210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.400 [2024-12-06 12:19:46.887220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.400 [2024-12-06 12:19:46.887239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.400 [2024-12-06 12:19:46.887619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.400 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.400 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:00.400 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.400 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.400 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:00.400 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.400 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:12:00.400 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:00.660 true 00:12:00.660 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:00.660 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:12:00.919 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:12:00.919 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:12:00.919 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:01.178 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:01.178 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:12:01.436 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:12:01.436 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:12:01.436 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:01.696 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:01.696 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:12:01.954 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:12:01.955 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:12:01.955 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:01.955 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:12:02.213 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:12:02.213 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:12:02.213 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:02.472 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:02.472 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:12:02.731 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:12:02.731 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:12:02.731 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:02.992 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:02.992 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:12:03.263 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:12:03.263 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.9q2xGetvlB 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.6DLrBNSnHD 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9q2xGetvlB 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.6DLrBNSnHD 00:12:03.264 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:03.542 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:04.121 [2024-12-06 12:19:50.476269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:04.121 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.9q2xGetvlB 00:12:04.121 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9q2xGetvlB 00:12:04.121 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:04.121 [2024-12-06 12:19:50.714523] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.121 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:04.379 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:04.636 [2024-12-06 12:19:51.142589] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:04.637 [2024-12-06 12:19:51.142790] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:04.637 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:04.894 malloc0 00:12:04.894 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:05.152 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9q2xGetvlB 00:12:05.152 12:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:05.409 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9q2xGetvlB 00:12:17.611 Initializing NVMe Controllers 00:12:17.611 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:17.611 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:17.611 Initialization complete. Launching workers. 00:12:17.611 ======================================================== 00:12:17.611 Latency(us) 00:12:17.611 Device Information : IOPS MiB/s Average min max 00:12:17.611 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11192.68 43.72 5719.04 1572.34 9275.22 00:12:17.611 ======================================================== 00:12:17.611 Total : 11192.68 43.72 5719.04 1572.34 9275.22 00:12:17.611 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9q2xGetvlB 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9q2xGetvlB 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70888 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70888 /var/tmp/bdevperf.sock 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70888 ']' 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
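The trace above condenses the whole target-side TLS bring-up: the ssl socket implementation is selected and probed (tls_version, kTLS on and off), two interchange-format PSKs are written to 0600 temp files, and a TCP transport, subsystem, TLS listener, namespace, keyring key and PSK-bound host are configured over rpc.py. A minimal sketch of those steps, using only commands and arguments visible in this run (the key path, NQNs and the 10.0.0.3:4420 listener are taken from the trace; anything else would need adjusting):

# sketch: target-side TLS setup as exercised by target/tls.sh (commands taken from the trace above)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key_path=/tmp/tmp.9q2xGetvlB                        # interchange-format PSK written and chmod'ed 0600 earlier in the trace
$rpc sock_set_default_impl -i ssl                   # route NVMe/TCP sockets through the ssl implementation
$rpc sock_impl_set_options -i ssl --tls-version 13  # the test pins TLS 1.3 before starting the framework
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k marks the listener as TLS
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0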
00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:17.612 [2024-12-06 12:20:02.260302] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:12:17.612 [2024-12-06 12:20:02.260414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70888 ] 00:12:17.612 [2024-12-06 12:20:02.414072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.612 [2024-12-06 12:20:02.452721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.612 [2024-12-06 12:20:02.486540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9q2xGetvlB 00:12:17.612 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:17.612 [2024-12-06 12:20:02.997330] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:17.612 TLSTESTn1 00:12:17.612 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:17.612 Running I/O for 10 seconds... 
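On the initiator side the same key is loaded into bdevperf's own keyring and the controller is attached with --psk, producing the TLSTESTn1 bdev that the 10-second verify workload above runs against. A condensed sketch of that half, again limited to invocations that appear in the trace (socket path and NQNs are from this run):

# sketch: host-side attach over TLS with bdevperf, as in the positive test above
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
# ... the script waits for $sock to appear (waitforlisten) before issuing RPCs ...
$rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.9q2xGetvlB
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests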
00:12:18.999 4650.00 IOPS, 18.16 MiB/s [2024-12-06T12:20:06.224Z] 4674.50 IOPS, 18.26 MiB/s [2024-12-06T12:20:07.601Z] 4744.00 IOPS, 18.53 MiB/s [2024-12-06T12:20:08.538Z] 4763.75 IOPS, 18.61 MiB/s [2024-12-06T12:20:09.475Z] 4783.40 IOPS, 18.69 MiB/s [2024-12-06T12:20:10.412Z] 4792.67 IOPS, 18.72 MiB/s [2024-12-06T12:20:11.349Z] 4796.43 IOPS, 18.74 MiB/s [2024-12-06T12:20:12.283Z] 4801.12 IOPS, 18.75 MiB/s [2024-12-06T12:20:13.220Z] 4806.22 IOPS, 18.77 MiB/s [2024-12-06T12:20:13.479Z] 4813.00 IOPS, 18.80 MiB/s 00:12:26.821 Latency(us) 00:12:26.821 [2024-12-06T12:20:13.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.821 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:26.821 Verification LBA range: start 0x0 length 0x2000 00:12:26.821 TLSTESTn1 : 10.02 4817.75 18.82 0.00 0.00 26521.12 5749.29 20375.74 00:12:26.821 [2024-12-06T12:20:13.479Z] =================================================================================================================== 00:12:26.821 [2024-12-06T12:20:13.479Z] Total : 4817.75 18.82 0.00 0.00 26521.12 5749.29 20375.74 00:12:26.821 { 00:12:26.821 "results": [ 00:12:26.821 { 00:12:26.821 "job": "TLSTESTn1", 00:12:26.821 "core_mask": "0x4", 00:12:26.821 "workload": "verify", 00:12:26.821 "status": "finished", 00:12:26.821 "verify_range": { 00:12:26.821 "start": 0, 00:12:26.821 "length": 8192 00:12:26.821 }, 00:12:26.821 "queue_depth": 128, 00:12:26.821 "io_size": 4096, 00:12:26.821 "runtime": 10.015871, 00:12:26.821 "iops": 4817.75374303443, 00:12:26.821 "mibps": 18.819350558728242, 00:12:26.821 "io_failed": 0, 00:12:26.821 "io_timeout": 0, 00:12:26.821 "avg_latency_us": 26521.11670049021, 00:12:26.821 "min_latency_us": 5749.294545454545, 00:12:26.821 "max_latency_us": 20375.738181818182 00:12:26.821 } 00:12:26.821 ], 00:12:26.821 "core_count": 1 00:12:26.821 } 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 70888 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70888 ']' 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70888 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70888 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:26.821 killing process with pid 70888 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70888' 00:12:26.821 Received shutdown signal, test time was about 10.000000 seconds 00:12:26.821 00:12:26.821 Latency(us) 00:12:26.821 [2024-12-06T12:20:13.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.821 [2024-12-06T12:20:13.479Z] =================================================================================================================== 00:12:26.821 [2024-12-06T12:20:13.479Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70888 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70888 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6DLrBNSnHD 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6DLrBNSnHD 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6DLrBNSnHD 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6DLrBNSnHD 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71017 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:26.821 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71017 /var/tmp/bdevperf.sock 00:12:26.822 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71017 ']' 00:12:26.822 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.822 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:26.822 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:26.822 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.822 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:26.822 [2024-12-06 12:20:13.461577] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:12:26.822 [2024-12-06 12:20:13.461681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71017 ] 00:12:27.079 [2024-12-06 12:20:13.597861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.079 [2024-12-06 12:20:13.626573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.079 [2024-12-06 12:20:13.654579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:27.079 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.079 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:27.080 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6DLrBNSnHD 00:12:27.337 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:27.595 [2024-12-06 12:20:14.233387] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:27.595 [2024-12-06 12:20:14.242378] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:27.595 [2024-12-06 12:20:14.242755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124a030 (107): Transport endpoint is not connected 00:12:27.595 [2024-12-06 12:20:14.243746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124a030 (9): Bad file descriptor 00:12:27.595 [2024-12-06 12:20:14.244743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:12:27.595 [2024-12-06 12:20:14.244779] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:27.595 [2024-12-06 12:20:14.244815] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:12:27.595 [2024-12-06 12:20:14.244837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
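This is the first negative case: the initiator registers /tmp/tmp.6DLrBNSnHD, the second key, which was never associated with cnode1/host1 on the target, so the TLS handshake cannot complete; the socket is closed and the attach surfaces 'Transport endpoint is not connected' before the controller is marked failed. The only expectation is that the attach RPC fails. A hedged sketch of that check (the script itself wraps run_bdevperf in its NOT helper; this spells the same idea out inline):

# sketch: attaching with a PSK the target does not know about must fail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
$rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.6DLrBNSnHD   # key never registered for this subsystem/host pair
if $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "unexpected: attach with a mismatched PSK succeeded" >&2
    exit 1
fi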
00:12:27.595 request: 00:12:27.595 { 00:12:27.595 "name": "TLSTEST", 00:12:27.595 "trtype": "tcp", 00:12:27.595 "traddr": "10.0.0.3", 00:12:27.595 "adrfam": "ipv4", 00:12:27.595 "trsvcid": "4420", 00:12:27.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:27.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:27.595 "prchk_reftag": false, 00:12:27.595 "prchk_guard": false, 00:12:27.595 "hdgst": false, 00:12:27.595 "ddgst": false, 00:12:27.595 "psk": "key0", 00:12:27.595 "allow_unrecognized_csi": false, 00:12:27.595 "method": "bdev_nvme_attach_controller", 00:12:27.595 "req_id": 1 00:12:27.595 } 00:12:27.595 Got JSON-RPC error response 00:12:27.595 response: 00:12:27.595 { 00:12:27.595 "code": -5, 00:12:27.595 "message": "Input/output error" 00:12:27.595 } 00:12:27.854 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71017 00:12:27.854 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71017 ']' 00:12:27.854 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71017 00:12:27.854 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:27.854 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.854 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71017 00:12:27.854 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:27.854 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:27.854 killing process with pid 71017 00:12:27.854 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71017' 00:12:27.854 Received shutdown signal, test time was about 10.000000 seconds 00:12:27.855 00:12:27.855 Latency(us) 00:12:27.855 [2024-12-06T12:20:14.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.855 [2024-12-06T12:20:14.513Z] =================================================================================================================== 00:12:27.855 [2024-12-06T12:20:14.513Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71017 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71017 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9q2xGetvlB 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9q2xGetvlB 
00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9q2xGetvlB 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9q2xGetvlB 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71037 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71037 /var/tmp/bdevperf.sock 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71037 ']' 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.855 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:27.855 [2024-12-06 12:20:14.466488] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:12:27.855 [2024-12-06 12:20:14.466591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71037 ] 00:12:28.113 [2024-12-06 12:20:14.603349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.113 [2024-12-06 12:20:14.632288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.113 [2024-12-06 12:20:14.660659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:28.113 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.113 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:28.113 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9q2xGetvlB 00:12:28.371 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:12:28.633 [2024-12-06 12:20:15.199371] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:28.633 [2024-12-06 12:20:15.203848] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:28.633 [2024-12-06 12:20:15.203900] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:28.633 [2024-12-06 12:20:15.203961] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:28.633 [2024-12-06 12:20:15.204634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde5030 (107): Transport endpoint is not connected 00:12:28.633 [2024-12-06 12:20:15.205631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde5030 (9): Bad file descriptor 00:12:28.633 [2024-12-06 12:20:15.206628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:12:28.633 [2024-12-06 12:20:15.206664] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:28.633 [2024-12-06 12:20:15.206689] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:12:28.633 [2024-12-06 12:20:15.206703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
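Here the key file is the correct one, but the connecting host NQN (host2) was never added to cnode1 with --psk, so the target-side lookup by TLS PSK identity ('NVMe0R01 <hostnqn> <subnqn>', as the error above shows) finds nothing and the handshake is rejected: the PSK is bound to the (subsystem, host) pair, not merely to the key file. For host2 to connect, the target would additionally need something like the following, which this negative test deliberately omits:

# sketch: what would be required for host2 to pass (intentionally not done by the test above)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0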
00:12:28.633 request: 00:12:28.633 { 00:12:28.633 "name": "TLSTEST", 00:12:28.633 "trtype": "tcp", 00:12:28.633 "traddr": "10.0.0.3", 00:12:28.633 "adrfam": "ipv4", 00:12:28.633 "trsvcid": "4420", 00:12:28.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.633 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:28.633 "prchk_reftag": false, 00:12:28.633 "prchk_guard": false, 00:12:28.633 "hdgst": false, 00:12:28.633 "ddgst": false, 00:12:28.633 "psk": "key0", 00:12:28.633 "allow_unrecognized_csi": false, 00:12:28.633 "method": "bdev_nvme_attach_controller", 00:12:28.633 "req_id": 1 00:12:28.633 } 00:12:28.633 Got JSON-RPC error response 00:12:28.633 response: 00:12:28.633 { 00:12:28.633 "code": -5, 00:12:28.633 "message": "Input/output error" 00:12:28.633 } 00:12:28.633 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71037 00:12:28.633 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71037 ']' 00:12:28.633 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71037 00:12:28.633 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:28.633 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.633 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71037 00:12:28.633 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:28.633 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:28.633 killing process with pid 71037 00:12:28.633 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71037' 00:12:28.633 Received shutdown signal, test time was about 10.000000 seconds 00:12:28.633 00:12:28.633 Latency(us) 00:12:28.633 [2024-12-06T12:20:15.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.633 [2024-12-06T12:20:15.291Z] =================================================================================================================== 00:12:28.633 [2024-12-06T12:20:15.291Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:28.633 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71037 00:12:28.633 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71037 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9q2xGetvlB 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9q2xGetvlB 
00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9q2xGetvlB 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9q2xGetvlB 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71054 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71054 /var/tmp/bdevperf.sock 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71054 ']' 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.891 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:28.891 [2024-12-06 12:20:15.422866] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:12:28.891 [2024-12-06 12:20:15.422968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71054 ] 00:12:29.150 [2024-12-06 12:20:15.560871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.150 [2024-12-06 12:20:15.590370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.150 [2024-12-06 12:20:15.618357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:29.150 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.150 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:29.150 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9q2xGetvlB 00:12:29.408 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:29.667 [2024-12-06 12:20:16.092811] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:29.667 [2024-12-06 12:20:16.102444] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:29.667 [2024-12-06 12:20:16.102497] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:29.667 [2024-12-06 12:20:16.102560] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:29.667 [2024-12-06 12:20:16.103162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd1030 (107): Transport endpoint is not connected 00:12:29.667 [2024-12-06 12:20:16.104153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd1030 (9): Bad file descriptor 00:12:29.667 [2024-12-06 12:20:16.105150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:12:29.667 [2024-12-06 12:20:16.105210] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:29.667 [2024-12-06 12:20:16.105222] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:12:29.667 [2024-12-06 12:20:16.105236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
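The symmetric case: the host NQN is the registered one, but the subsystem NQN (cnode2) does not exist on this target, so the identity 'NVMe0R01 host1 cnode2' again resolves to no PSK and the attach fails the same way. When debugging this class of failure it can help to dump what the target actually exposes; a hedged sketch (nvmf_get_subsystems is the standard listing RPC, and the field names here are the usual ones in its output, not something shown in this trace):

# sketch: inspect which subsystems, listeners and allowed hosts the target actually has
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_get_subsystems | jq '.[] | {nqn, listen_addresses, hosts}'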
00:12:29.667 request: 00:12:29.667 { 00:12:29.667 "name": "TLSTEST", 00:12:29.667 "trtype": "tcp", 00:12:29.667 "traddr": "10.0.0.3", 00:12:29.667 "adrfam": "ipv4", 00:12:29.667 "trsvcid": "4420", 00:12:29.667 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:29.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:29.667 "prchk_reftag": false, 00:12:29.667 "prchk_guard": false, 00:12:29.667 "hdgst": false, 00:12:29.667 "ddgst": false, 00:12:29.667 "psk": "key0", 00:12:29.667 "allow_unrecognized_csi": false, 00:12:29.667 "method": "bdev_nvme_attach_controller", 00:12:29.667 "req_id": 1 00:12:29.667 } 00:12:29.667 Got JSON-RPC error response 00:12:29.667 response: 00:12:29.667 { 00:12:29.667 "code": -5, 00:12:29.667 "message": "Input/output error" 00:12:29.667 } 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71054 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71054 ']' 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71054 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71054 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:29.667 killing process with pid 71054 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71054' 00:12:29.667 Received shutdown signal, test time was about 10.000000 seconds 00:12:29.667 00:12:29.667 Latency(us) 00:12:29.667 [2024-12-06T12:20:16.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.667 [2024-12-06T12:20:16.325Z] =================================================================================================================== 00:12:29.667 [2024-12-06T12:20:16.325Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71054 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71054 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:29.667 12:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71075 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71075 /var/tmp/bdevperf.sock 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71075 ']' 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.667 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:29.926 [2024-12-06 12:20:16.334393] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:12:29.926 [2024-12-06 12:20:16.334508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71075 ] 00:12:29.926 [2024-12-06 12:20:16.475976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.926 [2024-12-06 12:20:16.504951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.926 [2024-12-06 12:20:16.533023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:30.862 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.862 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:30.862 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:12:30.862 [2024-12-06 12:20:17.471116] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:12:30.862 [2024-12-06 12:20:17.471199] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:30.862 request: 00:12:30.862 { 00:12:30.862 "name": "key0", 00:12:30.862 "path": "", 00:12:30.862 "method": "keyring_file_add_key", 00:12:30.862 "req_id": 1 00:12:30.862 } 00:12:30.862 Got JSON-RPC error response 00:12:30.862 response: 00:12:30.862 { 00:12:30.862 "code": -1, 00:12:30.862 "message": "Operation not permitted" 00:12:30.862 } 00:12:30.862 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:31.122 [2024-12-06 12:20:17.751345] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:31.122 [2024-12-06 12:20:17.751416] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:12:31.122 request: 00:12:31.122 { 00:12:31.122 "name": "TLSTEST", 00:12:31.122 "trtype": "tcp", 00:12:31.122 "traddr": "10.0.0.3", 00:12:31.122 "adrfam": "ipv4", 00:12:31.122 "trsvcid": "4420", 00:12:31.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:31.122 "prchk_reftag": false, 00:12:31.122 "prchk_guard": false, 00:12:31.122 "hdgst": false, 00:12:31.122 "ddgst": false, 00:12:31.122 "psk": "key0", 00:12:31.122 "allow_unrecognized_csi": false, 00:12:31.122 "method": "bdev_nvme_attach_controller", 00:12:31.122 "req_id": 1 00:12:31.122 } 00:12:31.122 Got JSON-RPC error response 00:12:31.122 response: 00:12:31.122 { 00:12:31.122 "code": -126, 00:12:31.122 "message": "Required key not available" 00:12:31.122 } 00:12:31.122 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71075 00:12:31.122 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71075 ']' 00:12:31.122 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71075 00:12:31.122 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:31.122 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.122 12:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71075 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:31.382 killing process with pid 71075 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71075' 00:12:31.382 Received shutdown signal, test time was about 10.000000 seconds 00:12:31.382 00:12:31.382 Latency(us) 00:12:31.382 [2024-12-06T12:20:18.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.382 [2024-12-06T12:20:18.040Z] =================================================================================================================== 00:12:31.382 [2024-12-06T12:20:18.040Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71075 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71075 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 70662 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70662 ']' 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70662 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70662 00:12:31.382 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:31.383 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:31.383 killing process with pid 70662 00:12:31.383 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70662' 00:12:31.383 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70662 00:12:31.383 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70662 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.CyTxKSqpAN 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.CyTxKSqpAN 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71119 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71119 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71119 ']' 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.642 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:31.642 [2024-12-06 12:20:18.206268] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:12:31.642 [2024-12-06 12:20:18.206378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.901 [2024-12-06 12:20:18.345303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.901 [2024-12-06 12:20:18.371778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.901 [2024-12-06 12:20:18.371842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
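After the empty-path case above (keyring_file_add_key rejects '' with 'Non-absolute paths are not allowed', and the follow-up attach fails with 'Required key not available'), bdevperf and the original target are torn down and the suite switches to a second key flavour: format_interchange_psk is invoked with a longer hex key and digest argument 2, yielding an 'NVMeTLSkey-1:02:...' string instead of the ':01:' keys used so far. The file handling is unchanged; a condensed sketch of what the trace does with it (path and key value are the ones shown in this run):

# sketch: writing the digest-2 interchange key to a 0600 temp file, as done above
key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
key_long_path=$(mktemp)              # /tmp/tmp.CyTxKSqpAN in this run
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"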
00:12:31.901 [2024-12-06 12:20:18.371868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.901 [2024-12-06 12:20:18.371875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.901 [2024-12-06 12:20:18.371881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.901 [2024-12-06 12:20:18.372202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.901 [2024-12-06 12:20:18.401095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:32.840 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.840 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:32.840 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.840 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.840 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:32.840 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.840 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.CyTxKSqpAN 00:12:32.840 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CyTxKSqpAN 00:12:32.840 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:32.840 [2024-12-06 12:20:19.379617] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.840 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:33.099 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:33.359 [2024-12-06 12:20:19.839738] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:33.359 [2024-12-06 12:20:19.839937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:33.359 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:33.618 malloc0 00:12:33.618 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:33.878 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN 00:12:34.137 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CyTxKSqpAN 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
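setup_nvmf_tgt (target/tls.sh@50-59) drives the target side entirely through rpc.py; the same sequence recurs twice more later in this log. Collected in one place, with the arguments taken from the trace, it is roughly the following (a sketch assuming the default /var/tmp/spdk.sock target RPC socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.CyTxKSqpAN                                   # 0600 file holding the NVMeTLSkey-1:02:... string

    $rpc nvmf_create_transport -t tcp -o                      # TCP transport, flags exactly as traced
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -k                         # -k turns on TLS for this listener
    $rpc bdev_malloc_create 32 4096 -b malloc0                # 32 MB malloc bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key"                     # register the PSK file with the keyring
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0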
00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CyTxKSqpAN 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71169 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71169 /var/tmp/bdevperf.sock 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71169 ']' 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.397 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:34.397 [2024-12-06 12:20:20.873370] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
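run_bdevperf (target/tls.sh@22-46) exercises the initiator side: it starts bdevperf in wait-for-RPC mode on its own socket, registers the same key there, attaches a TLS-enabled controller, and then drives the verify workload through bdevperf.py, as the lines above and below show. A condensed sketch of that flow (the socket-wait loop is a stand-in for the waitforlisten helper traced above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &    # -z: wait for RPC before starting I/O
    while [ ! -S "$sock" ]; do sleep 0.1; done                    # stand-in for waitforlisten

    $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s "$sock" perform_tests                            # runs the configured 10 s verify job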
00:12:34.397 [2024-12-06 12:20:20.873473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71169 ] 00:12:34.397 [2024-12-06 12:20:21.021901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.656 [2024-12-06 12:20:21.063455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.656 [2024-12-06 12:20:21.097979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:35.226 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.226 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:35.226 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN 00:12:35.485 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:35.744 [2024-12-06 12:20:22.285387] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:35.744 TLSTESTn1 00:12:35.744 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:36.003 Running I/O for 10 seconds... 00:12:37.876 4608.00 IOPS, 18.00 MiB/s [2024-12-06T12:20:25.908Z] 4725.00 IOPS, 18.46 MiB/s [2024-12-06T12:20:26.842Z] 4759.00 IOPS, 18.59 MiB/s [2024-12-06T12:20:27.844Z] 4775.50 IOPS, 18.65 MiB/s [2024-12-06T12:20:28.786Z] 4773.00 IOPS, 18.64 MiB/s [2024-12-06T12:20:29.721Z] 4776.50 IOPS, 18.66 MiB/s [2024-12-06T12:20:30.657Z] 4775.43 IOPS, 18.65 MiB/s [2024-12-06T12:20:31.594Z] 4776.75 IOPS, 18.66 MiB/s [2024-12-06T12:20:32.531Z] 4779.00 IOPS, 18.67 MiB/s [2024-12-06T12:20:32.531Z] 4780.20 IOPS, 18.67 MiB/s 00:12:45.873 Latency(us) 00:12:45.873 [2024-12-06T12:20:32.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.873 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:45.873 Verification LBA range: start 0x0 length 0x2000 00:12:45.873 TLSTESTn1 : 10.01 4786.38 18.70 0.00 0.00 26697.74 4557.73 25261.15 00:12:45.873 [2024-12-06T12:20:32.531Z] =================================================================================================================== 00:12:45.873 [2024-12-06T12:20:32.531Z] Total : 4786.38 18.70 0.00 0.00 26697.74 4557.73 25261.15 00:12:45.873 { 00:12:45.873 "results": [ 00:12:45.873 { 00:12:45.873 "job": "TLSTESTn1", 00:12:45.873 "core_mask": "0x4", 00:12:45.873 "workload": "verify", 00:12:45.873 "status": "finished", 00:12:45.873 "verify_range": { 00:12:45.873 "start": 0, 00:12:45.873 "length": 8192 00:12:45.873 }, 00:12:45.873 "queue_depth": 128, 00:12:45.873 "io_size": 4096, 00:12:45.873 "runtime": 10.013407, 00:12:45.873 "iops": 4786.382896450728, 00:12:45.873 "mibps": 18.696808189260658, 00:12:45.873 "io_failed": 0, 00:12:45.873 "io_timeout": 0, 00:12:45.873 "avg_latency_us": 26697.74397353606, 00:12:45.873 "min_latency_us": 4557.730909090909, 00:12:45.873 
"max_latency_us": 25261.14909090909 00:12:45.873 } 00:12:45.873 ], 00:12:45.873 "core_count": 1 00:12:45.873 } 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71169 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71169 ']' 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71169 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71169 00:12:46.132 killing process with pid 71169 00:12:46.132 Received shutdown signal, test time was about 10.000000 seconds 00:12:46.132 00:12:46.132 Latency(us) 00:12:46.132 [2024-12-06T12:20:32.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.132 [2024-12-06T12:20:32.790Z] =================================================================================================================== 00:12:46.132 [2024-12-06T12:20:32.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71169' 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71169 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71169 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.CyTxKSqpAN 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CyTxKSqpAN 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CyTxKSqpAN 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CyTxKSqpAN 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CyTxKSqpAN 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71310 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71310 /var/tmp/bdevperf.sock 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71310 ']' 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.132 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:46.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:46.133 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.133 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:46.133 [2024-12-06 12:20:32.756736] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
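The chmod 0666 at target/tls.sh@171 above sets up the negative case that plays out in the next trace block: keyring_file_add_key rejects a key file whose mode grants group or other access, so this second bdevperf run (pid 71310) is expected to fail. A small sketch of that expectation; the owner-only requirement is assumed only from the "Invalid permissions ... 0100666" error that appears further on in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    chmod 0666 /tmp/tmp.CyTxKSqpAN                       # deliberately too permissive
    if $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN; then
        echo "unexpected: world-readable PSK file was accepted"
    else
        echo "expected: Operation not permitted until the file is chmod 0600 again"
    fi
    chmod 0600 /tmp/tmp.CyTxKSqpAN                       # restore the mode the keyring accepts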
00:12:46.133 [2024-12-06 12:20:32.756991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71310 ] 00:12:46.391 [2024-12-06 12:20:32.903014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.391 [2024-12-06 12:20:32.931301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.391 [2024-12-06 12:20:32.959336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:46.391 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.391 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:46.391 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN 00:12:46.650 [2024-12-06 12:20:33.253742] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CyTxKSqpAN': 0100666 00:12:46.650 [2024-12-06 12:20:33.253946] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:46.650 request: 00:12:46.650 { 00:12:46.650 "name": "key0", 00:12:46.650 "path": "/tmp/tmp.CyTxKSqpAN", 00:12:46.650 "method": "keyring_file_add_key", 00:12:46.650 "req_id": 1 00:12:46.650 } 00:12:46.650 Got JSON-RPC error response 00:12:46.650 response: 00:12:46.650 { 00:12:46.650 "code": -1, 00:12:46.650 "message": "Operation not permitted" 00:12:46.650 } 00:12:46.650 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:46.908 [2024-12-06 12:20:33.529890] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:46.908 [2024-12-06 12:20:33.529962] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:12:46.908 request: 00:12:46.908 { 00:12:46.908 "name": "TLSTEST", 00:12:46.908 "trtype": "tcp", 00:12:46.908 "traddr": "10.0.0.3", 00:12:46.908 "adrfam": "ipv4", 00:12:46.908 "trsvcid": "4420", 00:12:46.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:46.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:46.908 "prchk_reftag": false, 00:12:46.908 "prchk_guard": false, 00:12:46.908 "hdgst": false, 00:12:46.908 "ddgst": false, 00:12:46.908 "psk": "key0", 00:12:46.908 "allow_unrecognized_csi": false, 00:12:46.908 "method": "bdev_nvme_attach_controller", 00:12:46.908 "req_id": 1 00:12:46.908 } 00:12:46.908 Got JSON-RPC error response 00:12:46.908 response: 00:12:46.908 { 00:12:46.908 "code": -126, 00:12:46.908 "message": "Required key not available" 00:12:46.908 } 00:12:46.908 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71310 00:12:46.908 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71310 ']' 00:12:46.908 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71310 00:12:46.908 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:46.908 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.908 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71310 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71310' 00:12:47.167 killing process with pid 71310 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71310 00:12:47.167 Received shutdown signal, test time was about 10.000000 seconds 00:12:47.167 00:12:47.167 Latency(us) 00:12:47.167 [2024-12-06T12:20:33.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.167 [2024-12-06T12:20:33.825Z] =================================================================================================================== 00:12:47.167 [2024-12-06T12:20:33.825Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71310 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71119 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71119 ']' 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71119 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71119 00:12:47.167 killing process with pid 71119 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71119' 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71119 00:12:47.167 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71119 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:12:47.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71336 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71336 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71336 ']' 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.426 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:47.426 [2024-12-06 12:20:33.919937] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:12:47.426 [2024-12-06 12:20:33.920251] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.426 [2024-12-06 12:20:34.057348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.686 [2024-12-06 12:20:34.084708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.686 [2024-12-06 12:20:34.084979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.686 [2024-12-06 12:20:34.085098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.686 [2024-12-06 12:20:34.085109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.686 [2024-12-06 12:20:34.085117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
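The killprocess traces that recur throughout this section (autotest_common.sh@954-978, seen again just above for pids 71310 and 71119) all follow the same pattern: validate the PID, read the process name with ps, print the "killing process with pid ..." marker, signal it, and wait so the app can flush its shutdown and latency summary. A compressed sketch of that pattern, not the verbatim autotest_common.sh helper:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                     # is the process still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_1 / reactor_2 in the traces above
        echo "killing process with pid $pid"
        if [ "$name" = sudo ]; then
            sudo kill "$pid"                           # hypothetical branch; only the non-sudo path is traced here
        else
            kill "$pid"
        fi
        wait "$pid" || true                            # let the app print its shutdown/latency summary
    }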
00:12:47.686 [2024-12-06 12:20:34.085617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.686 [2024-12-06 12:20:34.113238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.CyTxKSqpAN 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.CyTxKSqpAN 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.CyTxKSqpAN 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CyTxKSqpAN 00:12:48.254 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:48.518 [2024-12-06 12:20:35.147521] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.518 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:49.085 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:49.085 [2024-12-06 12:20:35.647665] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:49.085 [2024-12-06 12:20:35.648048] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:49.085 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:49.344 malloc0 00:12:49.344 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:49.603 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN 00:12:49.863 
[2024-12-06 12:20:36.348863] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CyTxKSqpAN': 0100666 00:12:49.863 [2024-12-06 12:20:36.348899] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:49.863 request: 00:12:49.863 { 00:12:49.863 "name": "key0", 00:12:49.863 "path": "/tmp/tmp.CyTxKSqpAN", 00:12:49.863 "method": "keyring_file_add_key", 00:12:49.863 "req_id": 1 00:12:49.863 } 00:12:49.863 Got JSON-RPC error response 00:12:49.863 response: 00:12:49.863 { 00:12:49.863 "code": -1, 00:12:49.863 "message": "Operation not permitted" 00:12:49.863 } 00:12:49.863 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:50.122 [2024-12-06 12:20:36.608924] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:12:50.122 [2024-12-06 12:20:36.608973] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:12:50.122 request: 00:12:50.122 { 00:12:50.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.122 "host": "nqn.2016-06.io.spdk:host1", 00:12:50.122 "psk": "key0", 00:12:50.122 "method": "nvmf_subsystem_add_host", 00:12:50.122 "req_id": 1 00:12:50.122 } 00:12:50.122 Got JSON-RPC error response 00:12:50.122 response: 00:12:50.122 { 00:12:50.122 "code": -32603, 00:12:50.122 "message": "Internal error" 00:12:50.122 } 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71336 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71336 ']' 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71336 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71336 00:12:50.122 killing process with pid 71336 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71336' 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71336 00:12:50.122 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71336 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.CyTxKSqpAN 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71405 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71405 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:50.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71405 ']' 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.381 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:50.381 [2024-12-06 12:20:36.863965] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:12:50.381 [2024-12-06 12:20:36.864054] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.381 [2024-12-06 12:20:37.008436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.381 [2024-12-06 12:20:37.034862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.381 [2024-12-06 12:20:37.034916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.381 [2024-12-06 12:20:37.034943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.381 [2024-12-06 12:20:37.034950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.381 [2024-12-06 12:20:37.034956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:50.381 [2024-12-06 12:20:37.035281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.640 [2024-12-06 12:20:37.066693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:51.207 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.207 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:51.207 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.207 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.207 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:51.207 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.207 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.CyTxKSqpAN 00:12:51.207 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CyTxKSqpAN 00:12:51.207 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:51.775 [2024-12-06 12:20:38.125158] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.775 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:51.775 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:52.032 [2024-12-06 12:20:38.609220] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:52.032 [2024-12-06 12:20:38.609422] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:52.032 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:52.290 malloc0 00:12:52.290 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:52.547 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN 00:12:52.805 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:53.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
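With the key back at 0600 and the target set up again above, the test launches one more bdevperf instance (pid 71461 in the lines that follow), attaches over TLS, and then snapshots both sides' configuration at target/tls.sh@198-199; those JSON dumps make up the rest of this excerpt. The snapshot step itself is just two save_config calls (a sketch; the variable names are the script's own):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgtconf=$($rpc save_config)                                  # target side: keyring, sock, bdev, nvmf subsystems
    bdevperfconf=$($rpc -s /var/tmp/bdevperf.sock save_config)   # initiator side: key0 + bdev_nvme_attach_controller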
00:12:53.063 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71461 00:12:53.063 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:53.063 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:53.063 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71461 /var/tmp/bdevperf.sock 00:12:53.063 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71461 ']' 00:12:53.063 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:53.063 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.063 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:53.063 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.063 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:53.063 [2024-12-06 12:20:39.593798] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:12:53.063 [2024-12-06 12:20:39.594104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71461 ] 00:12:53.320 [2024-12-06 12:20:39.747194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.320 [2024-12-06 12:20:39.786075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.320 [2024-12-06 12:20:39.819704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:53.884 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.884 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:53.884 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN 00:12:54.449 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:54.449 [2024-12-06 12:20:41.014654] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:54.449 TLSTESTn1 00:12:54.449 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:55.018 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:12:55.018 "subsystems": [ 00:12:55.018 { 00:12:55.018 "subsystem": "keyring", 00:12:55.018 "config": [ 00:12:55.018 { 00:12:55.018 "method": "keyring_file_add_key", 00:12:55.018 "params": { 00:12:55.018 "name": "key0", 00:12:55.018 "path": "/tmp/tmp.CyTxKSqpAN" 00:12:55.018 } 00:12:55.018 } 00:12:55.018 ] 00:12:55.018 }, 
00:12:55.018 { 00:12:55.018 "subsystem": "iobuf", 00:12:55.018 "config": [ 00:12:55.018 { 00:12:55.018 "method": "iobuf_set_options", 00:12:55.018 "params": { 00:12:55.018 "small_pool_count": 8192, 00:12:55.018 "large_pool_count": 1024, 00:12:55.018 "small_bufsize": 8192, 00:12:55.018 "large_bufsize": 135168, 00:12:55.018 "enable_numa": false 00:12:55.018 } 00:12:55.018 } 00:12:55.018 ] 00:12:55.018 }, 00:12:55.018 { 00:12:55.018 "subsystem": "sock", 00:12:55.018 "config": [ 00:12:55.018 { 00:12:55.018 "method": "sock_set_default_impl", 00:12:55.018 "params": { 00:12:55.018 "impl_name": "uring" 00:12:55.018 } 00:12:55.018 }, 00:12:55.018 { 00:12:55.018 "method": "sock_impl_set_options", 00:12:55.018 "params": { 00:12:55.018 "impl_name": "ssl", 00:12:55.018 "recv_buf_size": 4096, 00:12:55.018 "send_buf_size": 4096, 00:12:55.018 "enable_recv_pipe": true, 00:12:55.018 "enable_quickack": false, 00:12:55.018 "enable_placement_id": 0, 00:12:55.018 "enable_zerocopy_send_server": true, 00:12:55.018 "enable_zerocopy_send_client": false, 00:12:55.018 "zerocopy_threshold": 0, 00:12:55.018 "tls_version": 0, 00:12:55.018 "enable_ktls": false 00:12:55.018 } 00:12:55.018 }, 00:12:55.018 { 00:12:55.018 "method": "sock_impl_set_options", 00:12:55.018 "params": { 00:12:55.018 "impl_name": "posix", 00:12:55.018 "recv_buf_size": 2097152, 00:12:55.018 "send_buf_size": 2097152, 00:12:55.018 "enable_recv_pipe": true, 00:12:55.018 "enable_quickack": false, 00:12:55.018 "enable_placement_id": 0, 00:12:55.018 "enable_zerocopy_send_server": true, 00:12:55.018 "enable_zerocopy_send_client": false, 00:12:55.018 "zerocopy_threshold": 0, 00:12:55.018 "tls_version": 0, 00:12:55.018 "enable_ktls": false 00:12:55.018 } 00:12:55.018 }, 00:12:55.018 { 00:12:55.018 "method": "sock_impl_set_options", 00:12:55.018 "params": { 00:12:55.018 "impl_name": "uring", 00:12:55.018 "recv_buf_size": 2097152, 00:12:55.018 "send_buf_size": 2097152, 00:12:55.018 "enable_recv_pipe": true, 00:12:55.018 "enable_quickack": false, 00:12:55.018 "enable_placement_id": 0, 00:12:55.018 "enable_zerocopy_send_server": false, 00:12:55.018 "enable_zerocopy_send_client": false, 00:12:55.018 "zerocopy_threshold": 0, 00:12:55.018 "tls_version": 0, 00:12:55.018 "enable_ktls": false 00:12:55.018 } 00:12:55.018 } 00:12:55.018 ] 00:12:55.018 }, 00:12:55.018 { 00:12:55.018 "subsystem": "vmd", 00:12:55.018 "config": [] 00:12:55.018 }, 00:12:55.018 { 00:12:55.018 "subsystem": "accel", 00:12:55.018 "config": [ 00:12:55.018 { 00:12:55.018 "method": "accel_set_options", 00:12:55.018 "params": { 00:12:55.018 "small_cache_size": 128, 00:12:55.018 "large_cache_size": 16, 00:12:55.018 "task_count": 2048, 00:12:55.018 "sequence_count": 2048, 00:12:55.018 "buf_count": 2048 00:12:55.018 } 00:12:55.018 } 00:12:55.018 ] 00:12:55.018 }, 00:12:55.018 { 00:12:55.018 "subsystem": "bdev", 00:12:55.018 "config": [ 00:12:55.018 { 00:12:55.018 "method": "bdev_set_options", 00:12:55.018 "params": { 00:12:55.018 "bdev_io_pool_size": 65535, 00:12:55.018 "bdev_io_cache_size": 256, 00:12:55.018 "bdev_auto_examine": true, 00:12:55.018 "iobuf_small_cache_size": 128, 00:12:55.018 "iobuf_large_cache_size": 16 00:12:55.018 } 00:12:55.018 }, 00:12:55.018 { 00:12:55.018 "method": "bdev_raid_set_options", 00:12:55.018 "params": { 00:12:55.018 "process_window_size_kb": 1024, 00:12:55.018 "process_max_bandwidth_mb_sec": 0 00:12:55.018 } 00:12:55.018 }, 00:12:55.018 { 00:12:55.018 "method": "bdev_iscsi_set_options", 00:12:55.018 "params": { 00:12:55.018 "timeout_sec": 30 00:12:55.018 } 00:12:55.018 
}, 00:12:55.018 { 00:12:55.018 "method": "bdev_nvme_set_options", 00:12:55.018 "params": { 00:12:55.018 "action_on_timeout": "none", 00:12:55.018 "timeout_us": 0, 00:12:55.018 "timeout_admin_us": 0, 00:12:55.018 "keep_alive_timeout_ms": 10000, 00:12:55.018 "arbitration_burst": 0, 00:12:55.018 "low_priority_weight": 0, 00:12:55.018 "medium_priority_weight": 0, 00:12:55.018 "high_priority_weight": 0, 00:12:55.018 "nvme_adminq_poll_period_us": 10000, 00:12:55.018 "nvme_ioq_poll_period_us": 0, 00:12:55.018 "io_queue_requests": 0, 00:12:55.018 "delay_cmd_submit": true, 00:12:55.018 "transport_retry_count": 4, 00:12:55.018 "bdev_retry_count": 3, 00:12:55.018 "transport_ack_timeout": 0, 00:12:55.018 "ctrlr_loss_timeout_sec": 0, 00:12:55.018 "reconnect_delay_sec": 0, 00:12:55.018 "fast_io_fail_timeout_sec": 0, 00:12:55.018 "disable_auto_failback": false, 00:12:55.018 "generate_uuids": false, 00:12:55.018 "transport_tos": 0, 00:12:55.018 "nvme_error_stat": false, 00:12:55.018 "rdma_srq_size": 0, 00:12:55.018 "io_path_stat": false, 00:12:55.018 "allow_accel_sequence": false, 00:12:55.018 "rdma_max_cq_size": 0, 00:12:55.018 "rdma_cm_event_timeout_ms": 0, 00:12:55.018 "dhchap_digests": [ 00:12:55.018 "sha256", 00:12:55.019 "sha384", 00:12:55.019 "sha512" 00:12:55.019 ], 00:12:55.019 "dhchap_dhgroups": [ 00:12:55.019 "null", 00:12:55.019 "ffdhe2048", 00:12:55.019 "ffdhe3072", 00:12:55.019 "ffdhe4096", 00:12:55.019 "ffdhe6144", 00:12:55.019 "ffdhe8192" 00:12:55.019 ] 00:12:55.019 } 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "method": "bdev_nvme_set_hotplug", 00:12:55.019 "params": { 00:12:55.019 "period_us": 100000, 00:12:55.019 "enable": false 00:12:55.019 } 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "method": "bdev_malloc_create", 00:12:55.019 "params": { 00:12:55.019 "name": "malloc0", 00:12:55.019 "num_blocks": 8192, 00:12:55.019 "block_size": 4096, 00:12:55.019 "physical_block_size": 4096, 00:12:55.019 "uuid": "d02480fa-c85f-4a94-a9dc-b569cbcc9bb5", 00:12:55.019 "optimal_io_boundary": 0, 00:12:55.019 "md_size": 0, 00:12:55.019 "dif_type": 0, 00:12:55.019 "dif_is_head_of_md": false, 00:12:55.019 "dif_pi_format": 0 00:12:55.019 } 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "method": "bdev_wait_for_examine" 00:12:55.019 } 00:12:55.019 ] 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "subsystem": "nbd", 00:12:55.019 "config": [] 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "subsystem": "scheduler", 00:12:55.019 "config": [ 00:12:55.019 { 00:12:55.019 "method": "framework_set_scheduler", 00:12:55.019 "params": { 00:12:55.019 "name": "static" 00:12:55.019 } 00:12:55.019 } 00:12:55.019 ] 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "subsystem": "nvmf", 00:12:55.019 "config": [ 00:12:55.019 { 00:12:55.019 "method": "nvmf_set_config", 00:12:55.019 "params": { 00:12:55.019 "discovery_filter": "match_any", 00:12:55.019 "admin_cmd_passthru": { 00:12:55.019 "identify_ctrlr": false 00:12:55.019 }, 00:12:55.019 "dhchap_digests": [ 00:12:55.019 "sha256", 00:12:55.019 "sha384", 00:12:55.019 "sha512" 00:12:55.019 ], 00:12:55.019 "dhchap_dhgroups": [ 00:12:55.019 "null", 00:12:55.019 "ffdhe2048", 00:12:55.019 "ffdhe3072", 00:12:55.019 "ffdhe4096", 00:12:55.019 "ffdhe6144", 00:12:55.019 "ffdhe8192" 00:12:55.019 ] 00:12:55.019 } 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "method": "nvmf_set_max_subsystems", 00:12:55.019 "params": { 00:12:55.019 "max_subsystems": 1024 00:12:55.019 } 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "method": "nvmf_set_crdt", 00:12:55.019 "params": { 00:12:55.019 "crdt1": 0, 00:12:55.019 
"crdt2": 0, 00:12:55.019 "crdt3": 0 00:12:55.019 } 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "method": "nvmf_create_transport", 00:12:55.019 "params": { 00:12:55.019 "trtype": "TCP", 00:12:55.019 "max_queue_depth": 128, 00:12:55.019 "max_io_qpairs_per_ctrlr": 127, 00:12:55.019 "in_capsule_data_size": 4096, 00:12:55.019 "max_io_size": 131072, 00:12:55.019 "io_unit_size": 131072, 00:12:55.019 "max_aq_depth": 128, 00:12:55.019 "num_shared_buffers": 511, 00:12:55.019 "buf_cache_size": 4294967295, 00:12:55.019 "dif_insert_or_strip": false, 00:12:55.019 "zcopy": false, 00:12:55.019 "c2h_success": false, 00:12:55.019 "sock_priority": 0, 00:12:55.019 "abort_timeout_sec": 1, 00:12:55.019 "ack_timeout": 0, 00:12:55.019 "data_wr_pool_size": 0 00:12:55.019 } 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "method": "nvmf_create_subsystem", 00:12:55.019 "params": { 00:12:55.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.019 "allow_any_host": false, 00:12:55.019 "serial_number": "SPDK00000000000001", 00:12:55.019 "model_number": "SPDK bdev Controller", 00:12:55.019 "max_namespaces": 10, 00:12:55.019 "min_cntlid": 1, 00:12:55.019 "max_cntlid": 65519, 00:12:55.019 "ana_reporting": false 00:12:55.019 } 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "method": "nvmf_subsystem_add_host", 00:12:55.019 "params": { 00:12:55.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.019 "host": "nqn.2016-06.io.spdk:host1", 00:12:55.019 "psk": "key0" 00:12:55.019 } 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "method": "nvmf_subsystem_add_ns", 00:12:55.019 "params": { 00:12:55.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.019 "namespace": { 00:12:55.019 "nsid": 1, 00:12:55.019 "bdev_name": "malloc0", 00:12:55.019 "nguid": "D02480FAC85F4A94A9DCB569CBCC9BB5", 00:12:55.019 "uuid": "d02480fa-c85f-4a94-a9dc-b569cbcc9bb5", 00:12:55.019 "no_auto_visible": false 00:12:55.019 } 00:12:55.019 } 00:12:55.019 }, 00:12:55.019 { 00:12:55.019 "method": "nvmf_subsystem_add_listener", 00:12:55.019 "params": { 00:12:55.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.019 "listen_address": { 00:12:55.019 "trtype": "TCP", 00:12:55.019 "adrfam": "IPv4", 00:12:55.019 "traddr": "10.0.0.3", 00:12:55.019 "trsvcid": "4420" 00:12:55.019 }, 00:12:55.019 "secure_channel": true 00:12:55.019 } 00:12:55.019 } 00:12:55.019 ] 00:12:55.019 } 00:12:55.019 ] 00:12:55.019 }' 00:12:55.019 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:55.279 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:12:55.279 "subsystems": [ 00:12:55.279 { 00:12:55.279 "subsystem": "keyring", 00:12:55.279 "config": [ 00:12:55.279 { 00:12:55.279 "method": "keyring_file_add_key", 00:12:55.279 "params": { 00:12:55.279 "name": "key0", 00:12:55.279 "path": "/tmp/tmp.CyTxKSqpAN" 00:12:55.279 } 00:12:55.279 } 00:12:55.279 ] 00:12:55.279 }, 00:12:55.279 { 00:12:55.279 "subsystem": "iobuf", 00:12:55.279 "config": [ 00:12:55.279 { 00:12:55.279 "method": "iobuf_set_options", 00:12:55.279 "params": { 00:12:55.279 "small_pool_count": 8192, 00:12:55.279 "large_pool_count": 1024, 00:12:55.279 "small_bufsize": 8192, 00:12:55.279 "large_bufsize": 135168, 00:12:55.279 "enable_numa": false 00:12:55.279 } 00:12:55.279 } 00:12:55.279 ] 00:12:55.279 }, 00:12:55.279 { 00:12:55.279 "subsystem": "sock", 00:12:55.279 "config": [ 00:12:55.279 { 00:12:55.279 "method": "sock_set_default_impl", 00:12:55.279 "params": { 00:12:55.279 "impl_name": "uring" 00:12:55.279 
} 00:12:55.279 }, 00:12:55.279 { 00:12:55.279 "method": "sock_impl_set_options", 00:12:55.279 "params": { 00:12:55.279 "impl_name": "ssl", 00:12:55.279 "recv_buf_size": 4096, 00:12:55.279 "send_buf_size": 4096, 00:12:55.279 "enable_recv_pipe": true, 00:12:55.279 "enable_quickack": false, 00:12:55.279 "enable_placement_id": 0, 00:12:55.279 "enable_zerocopy_send_server": true, 00:12:55.279 "enable_zerocopy_send_client": false, 00:12:55.279 "zerocopy_threshold": 0, 00:12:55.279 "tls_version": 0, 00:12:55.279 "enable_ktls": false 00:12:55.279 } 00:12:55.279 }, 00:12:55.279 { 00:12:55.279 "method": "sock_impl_set_options", 00:12:55.279 "params": { 00:12:55.279 "impl_name": "posix", 00:12:55.279 "recv_buf_size": 2097152, 00:12:55.279 "send_buf_size": 2097152, 00:12:55.279 "enable_recv_pipe": true, 00:12:55.279 "enable_quickack": false, 00:12:55.279 "enable_placement_id": 0, 00:12:55.279 "enable_zerocopy_send_server": true, 00:12:55.279 "enable_zerocopy_send_client": false, 00:12:55.279 "zerocopy_threshold": 0, 00:12:55.279 "tls_version": 0, 00:12:55.279 "enable_ktls": false 00:12:55.279 } 00:12:55.279 }, 00:12:55.279 { 00:12:55.279 "method": "sock_impl_set_options", 00:12:55.279 "params": { 00:12:55.279 "impl_name": "uring", 00:12:55.279 "recv_buf_size": 2097152, 00:12:55.279 "send_buf_size": 2097152, 00:12:55.279 "enable_recv_pipe": true, 00:12:55.279 "enable_quickack": false, 00:12:55.279 "enable_placement_id": 0, 00:12:55.279 "enable_zerocopy_send_server": false, 00:12:55.279 "enable_zerocopy_send_client": false, 00:12:55.279 "zerocopy_threshold": 0, 00:12:55.279 "tls_version": 0, 00:12:55.279 "enable_ktls": false 00:12:55.279 } 00:12:55.279 } 00:12:55.279 ] 00:12:55.279 }, 00:12:55.279 { 00:12:55.279 "subsystem": "vmd", 00:12:55.279 "config": [] 00:12:55.279 }, 00:12:55.279 { 00:12:55.279 "subsystem": "accel", 00:12:55.279 "config": [ 00:12:55.279 { 00:12:55.279 "method": "accel_set_options", 00:12:55.279 "params": { 00:12:55.279 "small_cache_size": 128, 00:12:55.279 "large_cache_size": 16, 00:12:55.279 "task_count": 2048, 00:12:55.279 "sequence_count": 2048, 00:12:55.279 "buf_count": 2048 00:12:55.279 } 00:12:55.279 } 00:12:55.279 ] 00:12:55.279 }, 00:12:55.279 { 00:12:55.279 "subsystem": "bdev", 00:12:55.279 "config": [ 00:12:55.279 { 00:12:55.279 "method": "bdev_set_options", 00:12:55.279 "params": { 00:12:55.279 "bdev_io_pool_size": 65535, 00:12:55.279 "bdev_io_cache_size": 256, 00:12:55.279 "bdev_auto_examine": true, 00:12:55.279 "iobuf_small_cache_size": 128, 00:12:55.279 "iobuf_large_cache_size": 16 00:12:55.279 } 00:12:55.279 }, 00:12:55.279 { 00:12:55.279 "method": "bdev_raid_set_options", 00:12:55.279 "params": { 00:12:55.279 "process_window_size_kb": 1024, 00:12:55.279 "process_max_bandwidth_mb_sec": 0 00:12:55.279 } 00:12:55.279 }, 00:12:55.279 { 00:12:55.279 "method": "bdev_iscsi_set_options", 00:12:55.279 "params": { 00:12:55.279 "timeout_sec": 30 00:12:55.279 } 00:12:55.279 }, 00:12:55.279 { 00:12:55.279 "method": "bdev_nvme_set_options", 00:12:55.279 "params": { 00:12:55.279 "action_on_timeout": "none", 00:12:55.279 "timeout_us": 0, 00:12:55.279 "timeout_admin_us": 0, 00:12:55.279 "keep_alive_timeout_ms": 10000, 00:12:55.279 "arbitration_burst": 0, 00:12:55.279 "low_priority_weight": 0, 00:12:55.279 "medium_priority_weight": 0, 00:12:55.279 "high_priority_weight": 0, 00:12:55.279 "nvme_adminq_poll_period_us": 10000, 00:12:55.279 "nvme_ioq_poll_period_us": 0, 00:12:55.279 "io_queue_requests": 512, 00:12:55.279 "delay_cmd_submit": true, 00:12:55.279 "transport_retry_count": 4, 
00:12:55.279 "bdev_retry_count": 3, 00:12:55.279 "transport_ack_timeout": 0, 00:12:55.279 "ctrlr_loss_timeout_sec": 0, 00:12:55.279 "reconnect_delay_sec": 0, 00:12:55.279 "fast_io_fail_timeout_sec": 0, 00:12:55.279 "disable_auto_failback": false, 00:12:55.279 "generate_uuids": false, 00:12:55.279 "transport_tos": 0, 00:12:55.279 "nvme_error_stat": false, 00:12:55.279 "rdma_srq_size": 0, 00:12:55.279 "io_path_stat": false, 00:12:55.279 "allow_accel_sequence": false, 00:12:55.279 "rdma_max_cq_size": 0, 00:12:55.279 "rdma_cm_event_timeout_ms": 0, 00:12:55.279 "dhchap_digests": [ 00:12:55.279 "sha256", 00:12:55.280 "sha384", 00:12:55.280 "sha512" 00:12:55.280 ], 00:12:55.280 "dhchap_dhgroups": [ 00:12:55.280 "null", 00:12:55.280 "ffdhe2048", 00:12:55.280 "ffdhe3072", 00:12:55.280 "ffdhe4096", 00:12:55.280 "ffdhe6144", 00:12:55.280 "ffdhe8192" 00:12:55.280 ] 00:12:55.280 } 00:12:55.280 }, 00:12:55.280 { 00:12:55.280 "method": "bdev_nvme_attach_controller", 00:12:55.280 "params": { 00:12:55.280 "name": "TLSTEST", 00:12:55.280 "trtype": "TCP", 00:12:55.280 "adrfam": "IPv4", 00:12:55.280 "traddr": "10.0.0.3", 00:12:55.280 "trsvcid": "4420", 00:12:55.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.280 "prchk_reftag": false, 00:12:55.280 "prchk_guard": false, 00:12:55.280 "ctrlr_loss_timeout_sec": 0, 00:12:55.280 "reconnect_delay_sec": 0, 00:12:55.280 "fast_io_fail_timeout_sec": 0, 00:12:55.280 "psk": "key0", 00:12:55.280 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:55.280 "hdgst": false, 00:12:55.280 "ddgst": false, 00:12:55.280 "multipath": "multipath" 00:12:55.280 } 00:12:55.280 }, 00:12:55.280 { 00:12:55.280 "method": "bdev_nvme_set_hotplug", 00:12:55.280 "params": { 00:12:55.280 "period_us": 100000, 00:12:55.280 "enable": false 00:12:55.280 } 00:12:55.280 }, 00:12:55.280 { 00:12:55.280 "method": "bdev_wait_for_examine" 00:12:55.280 } 00:12:55.280 ] 00:12:55.280 }, 00:12:55.280 { 00:12:55.280 "subsystem": "nbd", 00:12:55.280 "config": [] 00:12:55.280 } 00:12:55.280 ] 00:12:55.280 }' 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71461 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71461 ']' 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71461 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71461 00:12:55.280 killing process with pid 71461 00:12:55.280 Received shutdown signal, test time was about 10.000000 seconds 00:12:55.280 00:12:55.280 Latency(us) 00:12:55.280 [2024-12-06T12:20:41.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.280 [2024-12-06T12:20:41.938Z] =================================================================================================================== 00:12:55.280 [2024-12-06T12:20:41.938Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 71461' 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71461 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71461 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71405 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71405 ']' 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71405 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71405 00:12:55.280 killing process with pid 71405 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71405' 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71405 00:12:55.280 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71405 00:12:55.540 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:55.540 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:55.540 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.540 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:55.541 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:12:55.541 "subsystems": [ 00:12:55.541 { 00:12:55.541 "subsystem": "keyring", 00:12:55.541 "config": [ 00:12:55.541 { 00:12:55.541 "method": "keyring_file_add_key", 00:12:55.541 "params": { 00:12:55.541 "name": "key0", 00:12:55.541 "path": "/tmp/tmp.CyTxKSqpAN" 00:12:55.541 } 00:12:55.541 } 00:12:55.541 ] 00:12:55.541 }, 00:12:55.541 { 00:12:55.541 "subsystem": "iobuf", 00:12:55.541 "config": [ 00:12:55.541 { 00:12:55.541 "method": "iobuf_set_options", 00:12:55.541 "params": { 00:12:55.541 "small_pool_count": 8192, 00:12:55.541 "large_pool_count": 1024, 00:12:55.541 "small_bufsize": 8192, 00:12:55.541 "large_bufsize": 135168, 00:12:55.541 "enable_numa": false 00:12:55.541 } 00:12:55.541 } 00:12:55.541 ] 00:12:55.541 }, 00:12:55.541 { 00:12:55.541 "subsystem": "sock", 00:12:55.541 "config": [ 00:12:55.541 { 00:12:55.541 "method": "sock_set_default_impl", 00:12:55.541 "params": { 00:12:55.541 "impl_name": "uring" 00:12:55.541 } 00:12:55.541 }, 00:12:55.541 { 00:12:55.541 "method": "sock_impl_set_options", 00:12:55.541 "params": { 00:12:55.541 "impl_name": "ssl", 00:12:55.541 "recv_buf_size": 4096, 00:12:55.541 "send_buf_size": 4096, 00:12:55.541 "enable_recv_pipe": true, 00:12:55.541 "enable_quickack": false, 00:12:55.541 "enable_placement_id": 0, 00:12:55.541 "enable_zerocopy_send_server": true, 00:12:55.541 "enable_zerocopy_send_client": false, 00:12:55.541 "zerocopy_threshold": 0, 00:12:55.541 "tls_version": 0, 00:12:55.541 
"enable_ktls": false 00:12:55.541 } 00:12:55.541 }, 00:12:55.541 { 00:12:55.541 "method": "sock_impl_set_options", 00:12:55.541 "params": { 00:12:55.541 "impl_name": "posix", 00:12:55.541 "recv_buf_size": 2097152, 00:12:55.541 "send_buf_size": 2097152, 00:12:55.541 "enable_recv_pipe": true, 00:12:55.541 "enable_quickack": false, 00:12:55.541 "enable_placement_id": 0, 00:12:55.541 "enable_zerocopy_send_server": true, 00:12:55.541 "enable_zerocopy_send_client": false, 00:12:55.541 "zerocopy_threshold": 0, 00:12:55.541 "tls_version": 0, 00:12:55.541 "enable_ktls": false 00:12:55.541 } 00:12:55.541 }, 00:12:55.541 { 00:12:55.541 "method": "sock_impl_set_options", 00:12:55.541 "params": { 00:12:55.541 "impl_name": "uring", 00:12:55.541 "recv_buf_size": 2097152, 00:12:55.541 "send_buf_size": 2097152, 00:12:55.541 "enable_recv_pipe": true, 00:12:55.541 "enable_quickack": false, 00:12:55.541 "enable_placement_id": 0, 00:12:55.541 "enable_zerocopy_send_server": false, 00:12:55.541 "enable_zerocopy_send_client": false, 00:12:55.541 "zerocopy_threshold": 0, 00:12:55.541 "tls_version": 0, 00:12:55.541 "enable_ktls": false 00:12:55.541 } 00:12:55.541 } 00:12:55.541 ] 00:12:55.541 }, 00:12:55.541 { 00:12:55.541 "subsystem": "vmd", 00:12:55.541 "config": [] 00:12:55.541 }, 00:12:55.541 { 00:12:55.541 "subsystem": "accel", 00:12:55.541 "config": [ 00:12:55.541 { 00:12:55.541 "method": "accel_set_options", 00:12:55.541 "params": { 00:12:55.541 "small_cache_size": 128, 00:12:55.541 "large_cache_size": 16, 00:12:55.541 "task_count": 2048, 00:12:55.541 "sequence_count": 2048, 00:12:55.541 "buf_count": 2048 00:12:55.541 } 00:12:55.541 } 00:12:55.541 ] 00:12:55.541 }, 00:12:55.541 { 00:12:55.541 "subsystem": "bdev", 00:12:55.541 "config": [ 00:12:55.541 { 00:12:55.541 "method": "bdev_set_options", 00:12:55.541 "params": { 00:12:55.541 "bdev_io_pool_size": 65535, 00:12:55.541 "bdev_io_cache_size": 256, 00:12:55.541 "bdev_auto_examine": true, 00:12:55.541 "iobuf_small_cache_size": 128, 00:12:55.541 "iobuf_large_cache_size": 16 00:12:55.541 } 00:12:55.541 }, 00:12:55.541 { 00:12:55.541 "method": "bdev_raid_set_options", 00:12:55.541 "params": { 00:12:55.541 "process_window_size_kb": 1024, 00:12:55.541 "process_max_bandwidth_mb_sec": 0 00:12:55.541 } 00:12:55.541 }, 00:12:55.541 { 00:12:55.541 "method": "bdev_iscsi_set_options", 00:12:55.541 "params": { 00:12:55.541 "timeout_sec": 30 00:12:55.541 } 00:12:55.541 }, 00:12:55.541 { 00:12:55.541 "method": "bdev_nvme_set_options", 00:12:55.541 "params": { 00:12:55.541 "action_on_timeout": "none", 00:12:55.541 "timeout_us": 0, 00:12:55.541 "timeout_admin_us": 0, 00:12:55.541 "keep_alive_timeout_ms": 10000, 00:12:55.541 "arbitration_burst": 0, 00:12:55.541 "low_priority_weight": 0, 00:12:55.541 "medium_priority_weight": 0, 00:12:55.541 "high_priority_weight": 0, 00:12:55.541 "nvme_adminq_poll_period_us": 10000, 00:12:55.541 "nvme_ioq_poll_period_us": 0, 00:12:55.541 "io_queue_requests": 0, 00:12:55.541 "delay_cmd_submit": true, 00:12:55.541 "transport_retry_count": 4, 00:12:55.541 "bdev_retry_count": 3, 00:12:55.541 "transport_ack_timeout": 0, 00:12:55.541 "ctrlr_loss_timeout_sec": 0, 00:12:55.541 "reconnect_delay_sec": 0, 00:12:55.541 "fast_io_fail_timeout_sec": 0, 00:12:55.541 "disable_auto_failback": false, 00:12:55.541 "generate_uuids": false, 00:12:55.541 "transport_tos": 0, 00:12:55.541 "nvme_error_stat": false, 00:12:55.541 "rdma_srq_size": 0, 00:12:55.541 "io_path_stat": false, 00:12:55.541 "allow_accel_sequence": false, 00:12:55.542 "rdma_max_cq_size": 0, 
00:12:55.542 "rdma_cm_event_timeout_ms": 0, 00:12:55.542 "dhchap_digests": [ 00:12:55.542 "sha256", 00:12:55.542 "sha384", 00:12:55.542 "sha512" 00:12:55.542 ], 00:12:55.542 "dhchap_dhgroups": [ 00:12:55.542 "null", 00:12:55.542 "ffdhe2048", 00:12:55.542 "ffdhe3072", 00:12:55.542 "ffdhe4096", 00:12:55.542 "ffdhe6144", 00:12:55.542 "ffdhe8192" 00:12:55.542 ] 00:12:55.542 } 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "method": "bdev_nvme_set_hotplug", 00:12:55.542 "params": { 00:12:55.542 "period_us": 100000, 00:12:55.542 "enable": false 00:12:55.542 } 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "method": "bdev_malloc_create", 00:12:55.542 "params": { 00:12:55.542 "name": "malloc0", 00:12:55.542 "num_blocks": 8192, 00:12:55.542 "block_size": 4096, 00:12:55.542 "physical_block_size": 4096, 00:12:55.542 "uuid": "d02480fa-c85f-4a94-a9dc-b569cbcc9bb5", 00:12:55.542 "optimal_io_boundary": 0, 00:12:55.542 "md_size": 0, 00:12:55.542 "dif_type": 0, 00:12:55.542 "dif_is_head_of_md": false, 00:12:55.542 "dif_pi_format": 0 00:12:55.542 } 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "method": "bdev_wait_for_examine" 00:12:55.542 } 00:12:55.542 ] 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "subsystem": "nbd", 00:12:55.542 "config": [] 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "subsystem": "scheduler", 00:12:55.542 "config": [ 00:12:55.542 { 00:12:55.542 "method": "framework_set_scheduler", 00:12:55.542 "params": { 00:12:55.542 "name": "static" 00:12:55.542 } 00:12:55.542 } 00:12:55.542 ] 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "subsystem": "nvmf", 00:12:55.542 "config": [ 00:12:55.542 { 00:12:55.542 "method": "nvmf_set_config", 00:12:55.542 "params": { 00:12:55.542 "discovery_filter": "match_any", 00:12:55.542 "admin_cmd_passthru": { 00:12:55.542 "identify_ctrlr": false 00:12:55.542 }, 00:12:55.542 "dhchap_digests": [ 00:12:55.542 "sha256", 00:12:55.542 "sha384", 00:12:55.542 "sha512" 00:12:55.542 ], 00:12:55.542 "dhchap_dhgroups": [ 00:12:55.542 "null", 00:12:55.542 "ffdhe2048", 00:12:55.542 "ffdhe3072", 00:12:55.542 "ffdhe4096", 00:12:55.542 "ffdhe6144", 00:12:55.542 "ffdhe8192" 00:12:55.542 ] 00:12:55.542 } 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "method": "nvmf_set_max_subsystems", 00:12:55.542 "params": { 00:12:55.542 "max_subsystems": 1024 00:12:55.542 } 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "method": "nvmf_set_crdt", 00:12:55.542 "params": { 00:12:55.542 "crdt1": 0, 00:12:55.542 "crdt2": 0, 00:12:55.542 "crdt3": 0 00:12:55.542 } 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "method": "nvmf_create_transport", 00:12:55.542 "params": { 00:12:55.542 "trtype": "TCP", 00:12:55.542 "max_queue_depth": 128, 00:12:55.542 "max_io_qpairs_per_ctrlr": 127, 00:12:55.542 "in_capsule_data_size": 4096, 00:12:55.542 "max_io_size": 131072, 00:12:55.542 "io_unit_size": 131072, 00:12:55.542 "max_aq_depth": 128, 00:12:55.542 "num_shared_buffers": 511, 00:12:55.542 "buf_cache_size": 4294967295, 00:12:55.542 "dif_insert_or_strip": false, 00:12:55.542 "zcopy": false, 00:12:55.542 "c2h_success": false, 00:12:55.542 "sock_priority": 0, 00:12:55.542 "abort_timeout_sec": 1, 00:12:55.542 "ack_timeout": 0, 00:12:55.542 "data_wr_pool_size": 0 00:12:55.542 } 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "method": "nvmf_create_subsystem", 00:12:55.542 "params": { 00:12:55.542 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.542 "allow_any_host": false, 00:12:55.542 "serial_number": "SPDK00000000000001", 00:12:55.542 "model_number": "SPDK bdev Controller", 00:12:55.542 "max_namespaces": 10, 00:12:55.542 "min_cntlid": 1, 
00:12:55.542 "max_cntlid": 65519, 00:12:55.542 "ana_reporting": false 00:12:55.542 } 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "method": "nvmf_subsystem_add_host", 00:12:55.542 "params": { 00:12:55.542 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.542 "host": "nqn.2016-06.io.spdk:host1", 00:12:55.542 "psk": "key0" 00:12:55.542 } 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "method": "nvmf_subsystem_add_ns", 00:12:55.542 "params": { 00:12:55.542 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.542 "namespace": { 00:12:55.542 "nsid": 1, 00:12:55.542 "bdev_name": "malloc0", 00:12:55.542 "nguid": "D02480FAC85F4A94A9DCB569CBCC9BB5", 00:12:55.542 "uuid": "d02480fa-c85f-4a94-a9dc-b569cbcc9bb5", 00:12:55.542 "no_auto_visible": false 00:12:55.542 } 00:12:55.542 } 00:12:55.542 }, 00:12:55.542 { 00:12:55.542 "method": "nvmf_subsystem_add_listener", 00:12:55.542 "params": { 00:12:55.542 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.542 "listen_address": { 00:12:55.542 "trtype": "TCP", 00:12:55.542 "adrfam": "IPv4", 00:12:55.542 "traddr": "10.0.0.3", 00:12:55.542 "trsvcid": "4420" 00:12:55.542 }, 00:12:55.542 "secure_channel": true 00:12:55.542 } 00:12:55.542 } 00:12:55.542 ] 00:12:55.542 } 00:12:55.542 ] 00:12:55.542 }' 00:12:55.542 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71505 00:12:55.542 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:55.542 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71505 00:12:55.542 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71505 ']' 00:12:55.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.543 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.543 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.543 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.543 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.543 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:55.543 [2024-12-06 12:20:42.119655] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:12:55.543 [2024-12-06 12:20:42.119747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.802 [2024-12-06 12:20:42.265615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.802 [2024-12-06 12:20:42.293271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.802 [2024-12-06 12:20:42.293324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.802 [2024-12-06 12:20:42.293358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.802 [2024-12-06 12:20:42.293366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:55.802 [2024-12-06 12:20:42.293373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.802 [2024-12-06 12:20:42.293729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.802 [2024-12-06 12:20:42.434618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:56.061 [2024-12-06 12:20:42.490877] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.061 [2024-12-06 12:20:42.522871] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:56.061 [2024-12-06 12:20:42.523090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71537 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71537 /var/tmp/bdevperf.sock 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71537 ']' 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:56.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
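The '-c /dev/fd/62' argument to nvmf_tgt in the trace above is simply the echoed JSON configuration arriving over an anonymous file descriptor. One way to reproduce the same pattern by hand is a process substitution; this is a minimal sketch only, and /tmp/psk.key is a placeholder path, not the file used by the test:

# Start nvmf_tgt with an inline JSON config, equivalent in effect to '-c /dev/fd/NN'.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo '{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/psk.key" } }
      ]
    }
  ]
}')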
00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.629 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:12:56.629 "subsystems": [ 00:12:56.629 { 00:12:56.629 "subsystem": "keyring", 00:12:56.629 "config": [ 00:12:56.629 { 00:12:56.629 "method": "keyring_file_add_key", 00:12:56.629 "params": { 00:12:56.629 "name": "key0", 00:12:56.629 "path": "/tmp/tmp.CyTxKSqpAN" 00:12:56.629 } 00:12:56.629 } 00:12:56.629 ] 00:12:56.629 }, 00:12:56.629 { 00:12:56.629 "subsystem": "iobuf", 00:12:56.629 "config": [ 00:12:56.629 { 00:12:56.629 "method": "iobuf_set_options", 00:12:56.629 "params": { 00:12:56.629 "small_pool_count": 8192, 00:12:56.629 "large_pool_count": 1024, 00:12:56.629 "small_bufsize": 8192, 00:12:56.629 "large_bufsize": 135168, 00:12:56.629 "enable_numa": false 00:12:56.629 } 00:12:56.629 } 00:12:56.629 ] 00:12:56.629 }, 00:12:56.629 { 00:12:56.629 "subsystem": "sock", 00:12:56.629 "config": [ 00:12:56.629 { 00:12:56.629 "method": "sock_set_default_impl", 00:12:56.629 "params": { 00:12:56.629 "impl_name": "uring" 00:12:56.629 } 00:12:56.629 }, 00:12:56.629 { 00:12:56.629 "method": "sock_impl_set_options", 00:12:56.629 "params": { 00:12:56.629 "impl_name": "ssl", 00:12:56.629 "recv_buf_size": 4096, 00:12:56.629 "send_buf_size": 4096, 00:12:56.629 "enable_recv_pipe": true, 00:12:56.629 "enable_quickack": false, 00:12:56.629 "enable_placement_id": 0, 00:12:56.629 "enable_zerocopy_send_server": true, 00:12:56.629 "enable_zerocopy_send_client": false, 00:12:56.629 "zerocopy_threshold": 0, 00:12:56.629 "tls_version": 0, 00:12:56.629 "enable_ktls": false 00:12:56.629 } 00:12:56.629 }, 00:12:56.629 { 00:12:56.629 "method": "sock_impl_set_options", 00:12:56.629 "params": { 00:12:56.629 "impl_name": "posix", 00:12:56.629 "recv_buf_size": 2097152, 00:12:56.629 "send_buf_size": 2097152, 00:12:56.629 "enable_recv_pipe": true, 00:12:56.629 "enable_quickack": false, 00:12:56.629 "enable_placement_id": 0, 00:12:56.629 "enable_zerocopy_send_server": true, 00:12:56.629 "enable_zerocopy_send_client": false, 00:12:56.629 "zerocopy_threshold": 0, 00:12:56.629 "tls_version": 0, 00:12:56.629 "enable_ktls": false 00:12:56.629 } 00:12:56.629 }, 00:12:56.629 { 00:12:56.629 "method": "sock_impl_set_options", 00:12:56.629 "params": { 00:12:56.629 "impl_name": "uring", 00:12:56.629 "recv_buf_size": 2097152, 00:12:56.630 "send_buf_size": 2097152, 00:12:56.630 "enable_recv_pipe": true, 00:12:56.630 "enable_quickack": false, 00:12:56.630 "enable_placement_id": 0, 00:12:56.630 "enable_zerocopy_send_server": false, 00:12:56.630 "enable_zerocopy_send_client": false, 00:12:56.630 "zerocopy_threshold": 0, 00:12:56.630 "tls_version": 0, 00:12:56.630 "enable_ktls": false 00:12:56.630 } 00:12:56.630 } 00:12:56.630 ] 00:12:56.630 }, 00:12:56.630 { 00:12:56.630 "subsystem": "vmd", 00:12:56.630 "config": [] 00:12:56.630 }, 00:12:56.630 { 00:12:56.630 "subsystem": "accel", 00:12:56.630 "config": [ 00:12:56.630 { 00:12:56.630 "method": "accel_set_options", 00:12:56.630 "params": { 00:12:56.630 "small_cache_size": 128, 00:12:56.630 "large_cache_size": 16, 00:12:56.630 "task_count": 2048, 00:12:56.630 "sequence_count": 2048, 00:12:56.630 "buf_count": 2048 00:12:56.630 } 00:12:56.630 } 00:12:56.630 ] 00:12:56.630 }, 00:12:56.630 { 00:12:56.630 "subsystem": "bdev", 00:12:56.630 "config": [ 00:12:56.630 { 00:12:56.630 "method": 
"bdev_set_options", 00:12:56.630 "params": { 00:12:56.630 "bdev_io_pool_size": 65535, 00:12:56.630 "bdev_io_cache_size": 256, 00:12:56.630 "bdev_auto_examine": true, 00:12:56.630 "iobuf_small_cache_size": 128, 00:12:56.630 "iobuf_large_cache_size": 16 00:12:56.630 } 00:12:56.630 }, 00:12:56.630 { 00:12:56.630 "method": "bdev_raid_set_options", 00:12:56.630 "params": { 00:12:56.630 "process_window_size_kb": 1024, 00:12:56.630 "process_max_bandwidth_mb_sec": 0 00:12:56.630 } 00:12:56.630 }, 00:12:56.630 { 00:12:56.630 "method": "bdev_iscsi_set_options", 00:12:56.630 "params": { 00:12:56.630 "timeout_sec": 30 00:12:56.630 } 00:12:56.630 }, 00:12:56.630 { 00:12:56.630 "method": "bdev_nvme_set_options", 00:12:56.630 "params": { 00:12:56.630 "action_on_timeout": "none", 00:12:56.630 "timeout_us": 0, 00:12:56.630 "timeout_admin_us": 0, 00:12:56.630 "keep_alive_timeout_ms": 10000, 00:12:56.630 "arbitration_burst": 0, 00:12:56.630 "low_priority_weight": 0, 00:12:56.630 "medium_priority_weight": 0, 00:12:56.630 "high_priority_weight": 0, 00:12:56.630 "nvme_adminq_poll_period_us": 10000, 00:12:56.630 "nvme_ioq_poll_period_us": 0, 00:12:56.630 "io_queue_requests": 512, 00:12:56.630 "delay_cmd_submit": true, 00:12:56.630 "transport_retry_count": 4, 00:12:56.630 "bdev_retry_count": 3, 00:12:56.630 "transport_ack_timeout": 0, 00:12:56.630 "ctrlr_loss_timeout_sec": 0, 00:12:56.630 "reconnect_delay_sec": 0, 00:12:56.630 "fast_io_fail_timeout_sec": 0, 00:12:56.630 "disable_auto_failback": false, 00:12:56.630 "generate_uuids": false, 00:12:56.630 "transport_tos": 0, 00:12:56.630 "nvme_error_stat": false, 00:12:56.630 "rdma_srq_size": 0, 00:12:56.630 "io_path_stat": false, 00:12:56.630 "allow_accel_sequence": false, 00:12:56.630 "rdma_max_cq_size": 0, 00:12:56.630 "rdma_cm_event_timeout_ms": 0, 00:12:56.630 "dhchap_digests": [ 00:12:56.630 "sha256", 00:12:56.630 "sha384", 00:12:56.630 "sha512" 00:12:56.630 ], 00:12:56.630 "dhchap_dhgroups": [ 00:12:56.630 "null", 00:12:56.630 "ffdhe2048", 00:12:56.630 "ffdhe3072", 00:12:56.630 "ffdhe4096", 00:12:56.630 "ffdhe6144", 00:12:56.630 "ffdhe8192" 00:12:56.630 ] 00:12:56.630 } 00:12:56.630 }, 00:12:56.630 { 00:12:56.630 "method": "bdev_nvme_attach_controller", 00:12:56.630 "params": { 00:12:56.630 "name": "TLSTEST", 00:12:56.630 "trtype": "TCP", 00:12:56.630 "adrfam": "IPv4", 00:12:56.630 "traddr": "10.0.0.3", 00:12:56.630 "trsvcid": "4420", 00:12:56.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:56.630 "prchk_reftag": false, 00:12:56.630 "prchk_guard": false, 00:12:56.630 "ctrlr_loss_timeout_sec": 0, 00:12:56.630 "reconnect_delay_sec": 0, 00:12:56.630 "fast_io_fail_timeout_sec": 0, 00:12:56.630 "psk": "key0", 00:12:56.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:56.630 "hdgst": false, 00:12:56.630 "ddgst": false, 00:12:56.630 "multipath": "multipath" 00:12:56.630 } 00:12:56.630 }, 00:12:56.630 { 00:12:56.630 "method": "bdev_nvme_set_hotplug", 00:12:56.630 "params": { 00:12:56.630 "period_us": 100000, 00:12:56.630 "enable": false 00:12:56.630 } 00:12:56.630 }, 00:12:56.630 { 00:12:56.630 "method": "bdev_wait_for_examine" 00:12:56.630 } 00:12:56.630 ] 00:12:56.630 }, 00:12:56.630 { 00:12:56.630 "subsystem": "nbd", 00:12:56.630 "config": [] 00:12:56.630 } 00:12:56.630 ] 00:12:56.630 }' 00:12:56.630 [2024-12-06 12:20:43.219287] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:12:56.630 [2024-12-06 12:20:43.220046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71537 ] 00:12:56.890 [2024-12-06 12:20:43.363451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.890 [2024-12-06 12:20:43.402304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.890 [2024-12-06 12:20:43.517576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:57.149 [2024-12-06 12:20:43.553762] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:57.716 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.716 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:57.716 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:57.717 Running I/O for 10 seconds... 00:13:00.035 4737.00 IOPS, 18.50 MiB/s [2024-12-06T12:20:47.626Z] 4828.50 IOPS, 18.86 MiB/s [2024-12-06T12:20:48.563Z] 4853.33 IOPS, 18.96 MiB/s [2024-12-06T12:20:49.500Z] 4867.50 IOPS, 19.01 MiB/s [2024-12-06T12:20:50.435Z] 4869.40 IOPS, 19.02 MiB/s [2024-12-06T12:20:51.371Z] 4868.00 IOPS, 19.02 MiB/s [2024-12-06T12:20:52.307Z] 4880.86 IOPS, 19.07 MiB/s [2024-12-06T12:20:53.683Z] 4885.25 IOPS, 19.08 MiB/s [2024-12-06T12:20:54.620Z] 4884.11 IOPS, 19.08 MiB/s [2024-12-06T12:20:54.620Z] 4882.70 IOPS, 19.07 MiB/s 00:13:07.962 Latency(us) 00:13:07.962 [2024-12-06T12:20:54.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.962 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:07.962 Verification LBA range: start 0x0 length 0x2000 00:13:07.962 TLSTESTn1 : 10.01 4888.55 19.10 0.00 0.00 26138.80 5183.30 20018.27 00:13:07.962 [2024-12-06T12:20:54.620Z] =================================================================================================================== 00:13:07.962 [2024-12-06T12:20:54.620Z] Total : 4888.55 19.10 0.00 0.00 26138.80 5183.30 20018.27 00:13:07.962 { 00:13:07.962 "results": [ 00:13:07.962 { 00:13:07.962 "job": "TLSTESTn1", 00:13:07.962 "core_mask": "0x4", 00:13:07.962 "workload": "verify", 00:13:07.962 "status": "finished", 00:13:07.962 "verify_range": { 00:13:07.962 "start": 0, 00:13:07.962 "length": 8192 00:13:07.962 }, 00:13:07.962 "queue_depth": 128, 00:13:07.962 "io_size": 4096, 00:13:07.962 "runtime": 10.014011, 00:13:07.962 "iops": 4888.550651681929, 00:13:07.962 "mibps": 19.095900983132534, 00:13:07.962 "io_failed": 0, 00:13:07.962 "io_timeout": 0, 00:13:07.962 "avg_latency_us": 26138.80426404008, 00:13:07.962 "min_latency_us": 5183.301818181818, 00:13:07.962 "max_latency_us": 20018.269090909092 00:13:07.962 } 00:13:07.962 ], 00:13:07.962 "core_count": 1 00:13:07.962 } 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71537 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71537 ']' 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 71537 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71537 00:13:07.962 killing process with pid 71537 00:13:07.962 Received shutdown signal, test time was about 10.000000 seconds 00:13:07.962 00:13:07.962 Latency(us) 00:13:07.962 [2024-12-06T12:20:54.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.962 [2024-12-06T12:20:54.620Z] =================================================================================================================== 00:13:07.962 [2024-12-06T12:20:54.620Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71537' 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71537 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71537 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71505 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71505 ']' 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71505 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71505 00:13:07.962 killing process with pid 71505 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71505' 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71505 00:13:07.962 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71505 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71671 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:08.221 12:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71671 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71671 ']' 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.221 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.221 [2024-12-06 12:20:54.707791] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:08.222 [2024-12-06 12:20:54.708041] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.222 [2024-12-06 12:20:54.853227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.481 [2024-12-06 12:20:54.892088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.481 [2024-12-06 12:20:54.892151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.481 [2024-12-06 12:20:54.892204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.481 [2024-12-06 12:20:54.892217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.481 [2024-12-06 12:20:54.892226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:08.481 [2024-12-06 12:20:54.892631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.481 [2024-12-06 12:20:54.926291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:08.481 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.481 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:08.481 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:08.481 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:08.481 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.481 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.481 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.CyTxKSqpAN 00:13:08.481 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CyTxKSqpAN 00:13:08.481 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:08.740 [2024-12-06 12:20:55.265043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.740 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:09.008 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:09.326 [2024-12-06 12:20:55.757139] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:09.326 [2024-12-06 12:20:55.757373] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:09.326 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:09.609 malloc0 00:13:09.609 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:09.873 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN 00:13:09.873 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:10.131 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:10.131 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=71719 00:13:10.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
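Collected in one place, the setup_nvmf_tgt sequence that target/tls.sh drives through rpc.py in the trace above amounts to the following; every command and argument is taken from the trace, with the PSK path being the temporary key file generated earlier in the test:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS (secure channel) listener
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0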
00:13:10.131 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:10.131 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 71719 /var/tmp/bdevperf.sock 00:13:10.131 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71719 ']' 00:13:10.131 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:10.131 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.131 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:10.131 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.131 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:10.131 [2024-12-06 12:20:56.738937] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:10.131 [2024-12-06 12:20:56.739767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71719 ] 00:13:10.390 [2024-12-06 12:20:56.887419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.390 [2024-12-06 12:20:56.927687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.390 [2024-12-06 12:20:56.962932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:11.326 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.326 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:11.326 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN 00:13:11.326 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:11.585 [2024-12-06 12:20:58.150802] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:11.585 nvme0n1 00:13:11.585 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:11.843 Running I/O for 1 seconds... 
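The initiator-side setup just traced boils down to three commands: register the same PSK with the running bdevperf instance, attach the TLS-protected controller, and kick off the I/O phase via bdevperf.py. As taken from the trace above:

RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
$RPC keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests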
00:13:12.777 4837.00 IOPS, 18.89 MiB/s 00:13:12.777 Latency(us) 00:13:12.777 [2024-12-06T12:20:59.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.777 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:12.777 Verification LBA range: start 0x0 length 0x2000 00:13:12.777 nvme0n1 : 1.01 4902.29 19.15 0.00 0.00 25925.71 4021.53 19422.49 00:13:12.777 [2024-12-06T12:20:59.435Z] =================================================================================================================== 00:13:12.777 [2024-12-06T12:20:59.435Z] Total : 4902.29 19.15 0.00 0.00 25925.71 4021.53 19422.49 00:13:12.777 { 00:13:12.777 "results": [ 00:13:12.777 { 00:13:12.777 "job": "nvme0n1", 00:13:12.777 "core_mask": "0x2", 00:13:12.777 "workload": "verify", 00:13:12.777 "status": "finished", 00:13:12.777 "verify_range": { 00:13:12.777 "start": 0, 00:13:12.777 "length": 8192 00:13:12.777 }, 00:13:12.777 "queue_depth": 128, 00:13:12.777 "io_size": 4096, 00:13:12.777 "runtime": 1.012793, 00:13:12.777 "iops": 4902.2850671361275, 00:13:12.777 "mibps": 19.149551043500498, 00:13:12.777 "io_failed": 0, 00:13:12.777 "io_timeout": 0, 00:13:12.777 "avg_latency_us": 25925.706810216976, 00:13:12.777 "min_latency_us": 4021.5272727272727, 00:13:12.777 "max_latency_us": 19422.487272727274 00:13:12.777 } 00:13:12.777 ], 00:13:12.777 "core_count": 1 00:13:12.777 } 00:13:12.777 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 71719 00:13:12.777 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71719 ']' 00:13:12.777 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71719 00:13:12.777 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:12.777 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.777 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71719 00:13:12.777 killing process with pid 71719 00:13:12.777 Received shutdown signal, test time was about 1.000000 seconds 00:13:12.777 00:13:12.777 Latency(us) 00:13:12.777 [2024-12-06T12:20:59.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.777 [2024-12-06T12:20:59.435Z] =================================================================================================================== 00:13:12.777 [2024-12-06T12:20:59.435Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:12.777 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:12.777 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:12.777 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71719' 00:13:12.777 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71719 00:13:12.777 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71719 00:13:13.035 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 71671 00:13:13.035 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71671 ']' 00:13:13.035 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71671 00:13:13.035 12:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:13.035 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.036 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71671 00:13:13.036 killing process with pid 71671 00:13:13.036 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.036 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.036 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71671' 00:13:13.036 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71671 00:13:13.036 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71671 00:13:13.293 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:13:13.293 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:13.294 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:13.294 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:13.294 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71770 00:13:13.294 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:13.294 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71770 00:13:13.294 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71770 ']' 00:13:13.294 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.294 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.294 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.294 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.294 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:13.294 [2024-12-06 12:20:59.783365] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:13.294 [2024-12-06 12:20:59.784384] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.294 [2024-12-06 12:20:59.932727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.552 [2024-12-06 12:20:59.961906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.552 [2024-12-06 12:20:59.961956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
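As a cross-check on the Latency tables earlier in this log, the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size: 4888.55 IOPS from the 10-second TLSTESTn1 run works out to about 19.10 MiB/s, and 4902.29 IOPS from the 1-second nvme0n1 run to about 19.15 MiB/s, matching the reported values. A one-line check:

awk -v iops=4888.55 -v bs=4096 'BEGIN { printf "%.2f MiB/s\n", iops*bs/1048576 }'   # -> 19.10 MiB/s
awk -v iops=4902.29 -v bs=4096 'BEGIN { printf "%.2f MiB/s\n", iops*bs/1048576 }'   # -> 19.15 MiB/s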
00:13:13.552 [2024-12-06 12:20:59.961967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.552 [2024-12-06 12:20:59.961973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.552 [2024-12-06 12:20:59.961978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.552 [2024-12-06 12:20:59.962282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.552 [2024-12-06 12:20:59.989440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:14.117 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.117 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:14.118 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:14.118 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:14.118 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:14.118 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.118 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:13:14.118 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.118 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:14.118 [2024-12-06 12:21:00.728741] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.118 malloc0 00:13:14.118 [2024-12-06 12:21:00.754907] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:14.118 [2024-12-06 12:21:00.755097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:14.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:14.375 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.375 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=71802 00:13:14.375 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:14.375 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 71802 /var/tmp/bdevperf.sock 00:13:14.375 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71802 ']' 00:13:14.375 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:14.375 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.375 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:14.375 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.375 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:14.375 [2024-12-06 12:21:00.841388] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:14.375 [2024-12-06 12:21:00.841675] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71802 ] 00:13:14.375 [2024-12-06 12:21:00.988234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.375 [2024-12-06 12:21:01.017868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.632 [2024-12-06 12:21:01.047900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:14.633 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.633 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:14.633 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CyTxKSqpAN 00:13:14.890 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:15.148 [2024-12-06 12:21:01.587289] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:15.148 nvme0n1 00:13:15.148 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:15.148 Running I/O for 1 seconds... 
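Once this run completes, the test snapshots the live configurations with save_config (the tgtcfg and bperfcfg dumps expanded below). A minimal sketch of capturing the same snapshots to files, assuming the default target RPC socket and the bdevperf socket used throughout this log:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC save_config > tgt_config.json                               # target app (default /var/tmp/spdk.sock)
$RPC -s /var/tmp/bdevperf.sock save_config > bperf_config.json   # bdevperf initiator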
00:13:16.522 4480.00 IOPS, 17.50 MiB/s 00:13:16.522 Latency(us) 00:13:16.522 [2024-12-06T12:21:03.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.522 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.522 Verification LBA range: start 0x0 length 0x2000 00:13:16.522 nvme0n1 : 1.02 4504.80 17.60 0.00 0.00 28134.92 6464.23 18230.92 00:13:16.522 [2024-12-06T12:21:03.180Z] =================================================================================================================== 00:13:16.522 [2024-12-06T12:21:03.180Z] Total : 4504.80 17.60 0.00 0.00 28134.92 6464.23 18230.92 00:13:16.522 { 00:13:16.522 "results": [ 00:13:16.522 { 00:13:16.522 "job": "nvme0n1", 00:13:16.522 "core_mask": "0x2", 00:13:16.522 "workload": "verify", 00:13:16.522 "status": "finished", 00:13:16.522 "verify_range": { 00:13:16.522 "start": 0, 00:13:16.522 "length": 8192 00:13:16.522 }, 00:13:16.522 "queue_depth": 128, 00:13:16.522 "io_size": 4096, 00:13:16.522 "runtime": 1.022908, 00:13:16.522 "iops": 4504.8039510884655, 00:13:16.522 "mibps": 17.59689043393932, 00:13:16.522 "io_failed": 0, 00:13:16.522 "io_timeout": 0, 00:13:16.522 "avg_latency_us": 28134.920404040404, 00:13:16.522 "min_latency_us": 6464.232727272727, 00:13:16.522 "max_latency_us": 18230.923636363637 00:13:16.522 } 00:13:16.522 ], 00:13:16.522 "core_count": 1 00:13:16.522 } 00:13:16.522 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:13:16.522 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.522 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:16.522 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.522 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:13:16.522 "subsystems": [ 00:13:16.522 { 00:13:16.522 "subsystem": "keyring", 00:13:16.522 "config": [ 00:13:16.522 { 00:13:16.522 "method": "keyring_file_add_key", 00:13:16.522 "params": { 00:13:16.522 "name": "key0", 00:13:16.522 "path": "/tmp/tmp.CyTxKSqpAN" 00:13:16.522 } 00:13:16.522 } 00:13:16.522 ] 00:13:16.522 }, 00:13:16.522 { 00:13:16.522 "subsystem": "iobuf", 00:13:16.522 "config": [ 00:13:16.522 { 00:13:16.522 "method": "iobuf_set_options", 00:13:16.522 "params": { 00:13:16.522 "small_pool_count": 8192, 00:13:16.522 "large_pool_count": 1024, 00:13:16.522 "small_bufsize": 8192, 00:13:16.522 "large_bufsize": 135168, 00:13:16.522 "enable_numa": false 00:13:16.522 } 00:13:16.522 } 00:13:16.522 ] 00:13:16.522 }, 00:13:16.522 { 00:13:16.522 "subsystem": "sock", 00:13:16.522 "config": [ 00:13:16.522 { 00:13:16.522 "method": "sock_set_default_impl", 00:13:16.522 "params": { 00:13:16.522 "impl_name": "uring" 00:13:16.522 } 00:13:16.522 }, 00:13:16.522 { 00:13:16.522 "method": "sock_impl_set_options", 00:13:16.522 "params": { 00:13:16.522 "impl_name": "ssl", 00:13:16.522 "recv_buf_size": 4096, 00:13:16.522 "send_buf_size": 4096, 00:13:16.522 "enable_recv_pipe": true, 00:13:16.522 "enable_quickack": false, 00:13:16.522 "enable_placement_id": 0, 00:13:16.522 "enable_zerocopy_send_server": true, 00:13:16.522 "enable_zerocopy_send_client": false, 00:13:16.522 "zerocopy_threshold": 0, 00:13:16.522 "tls_version": 0, 00:13:16.522 "enable_ktls": false 00:13:16.522 } 00:13:16.522 }, 00:13:16.522 { 00:13:16.522 "method": "sock_impl_set_options", 00:13:16.522 "params": { 00:13:16.522 "impl_name": 
"posix", 00:13:16.522 "recv_buf_size": 2097152, 00:13:16.522 "send_buf_size": 2097152, 00:13:16.522 "enable_recv_pipe": true, 00:13:16.522 "enable_quickack": false, 00:13:16.522 "enable_placement_id": 0, 00:13:16.522 "enable_zerocopy_send_server": true, 00:13:16.522 "enable_zerocopy_send_client": false, 00:13:16.522 "zerocopy_threshold": 0, 00:13:16.522 "tls_version": 0, 00:13:16.522 "enable_ktls": false 00:13:16.522 } 00:13:16.522 }, 00:13:16.522 { 00:13:16.522 "method": "sock_impl_set_options", 00:13:16.522 "params": { 00:13:16.522 "impl_name": "uring", 00:13:16.522 "recv_buf_size": 2097152, 00:13:16.522 "send_buf_size": 2097152, 00:13:16.522 "enable_recv_pipe": true, 00:13:16.522 "enable_quickack": false, 00:13:16.522 "enable_placement_id": 0, 00:13:16.522 "enable_zerocopy_send_server": false, 00:13:16.522 "enable_zerocopy_send_client": false, 00:13:16.522 "zerocopy_threshold": 0, 00:13:16.522 "tls_version": 0, 00:13:16.522 "enable_ktls": false 00:13:16.522 } 00:13:16.522 } 00:13:16.522 ] 00:13:16.522 }, 00:13:16.522 { 00:13:16.522 "subsystem": "vmd", 00:13:16.522 "config": [] 00:13:16.522 }, 00:13:16.522 { 00:13:16.522 "subsystem": "accel", 00:13:16.522 "config": [ 00:13:16.522 { 00:13:16.522 "method": "accel_set_options", 00:13:16.522 "params": { 00:13:16.522 "small_cache_size": 128, 00:13:16.522 "large_cache_size": 16, 00:13:16.522 "task_count": 2048, 00:13:16.522 "sequence_count": 2048, 00:13:16.522 "buf_count": 2048 00:13:16.522 } 00:13:16.522 } 00:13:16.523 ] 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "subsystem": "bdev", 00:13:16.523 "config": [ 00:13:16.523 { 00:13:16.523 "method": "bdev_set_options", 00:13:16.523 "params": { 00:13:16.523 "bdev_io_pool_size": 65535, 00:13:16.523 "bdev_io_cache_size": 256, 00:13:16.523 "bdev_auto_examine": true, 00:13:16.523 "iobuf_small_cache_size": 128, 00:13:16.523 "iobuf_large_cache_size": 16 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "bdev_raid_set_options", 00:13:16.523 "params": { 00:13:16.523 "process_window_size_kb": 1024, 00:13:16.523 "process_max_bandwidth_mb_sec": 0 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "bdev_iscsi_set_options", 00:13:16.523 "params": { 00:13:16.523 "timeout_sec": 30 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "bdev_nvme_set_options", 00:13:16.523 "params": { 00:13:16.523 "action_on_timeout": "none", 00:13:16.523 "timeout_us": 0, 00:13:16.523 "timeout_admin_us": 0, 00:13:16.523 "keep_alive_timeout_ms": 10000, 00:13:16.523 "arbitration_burst": 0, 00:13:16.523 "low_priority_weight": 0, 00:13:16.523 "medium_priority_weight": 0, 00:13:16.523 "high_priority_weight": 0, 00:13:16.523 "nvme_adminq_poll_period_us": 10000, 00:13:16.523 "nvme_ioq_poll_period_us": 0, 00:13:16.523 "io_queue_requests": 0, 00:13:16.523 "delay_cmd_submit": true, 00:13:16.523 "transport_retry_count": 4, 00:13:16.523 "bdev_retry_count": 3, 00:13:16.523 "transport_ack_timeout": 0, 00:13:16.523 "ctrlr_loss_timeout_sec": 0, 00:13:16.523 "reconnect_delay_sec": 0, 00:13:16.523 "fast_io_fail_timeout_sec": 0, 00:13:16.523 "disable_auto_failback": false, 00:13:16.523 "generate_uuids": false, 00:13:16.523 "transport_tos": 0, 00:13:16.523 "nvme_error_stat": false, 00:13:16.523 "rdma_srq_size": 0, 00:13:16.523 "io_path_stat": false, 00:13:16.523 "allow_accel_sequence": false, 00:13:16.523 "rdma_max_cq_size": 0, 00:13:16.523 "rdma_cm_event_timeout_ms": 0, 00:13:16.523 "dhchap_digests": [ 00:13:16.523 "sha256", 00:13:16.523 "sha384", 00:13:16.523 "sha512" 00:13:16.523 ], 00:13:16.523 
"dhchap_dhgroups": [ 00:13:16.523 "null", 00:13:16.523 "ffdhe2048", 00:13:16.523 "ffdhe3072", 00:13:16.523 "ffdhe4096", 00:13:16.523 "ffdhe6144", 00:13:16.523 "ffdhe8192" 00:13:16.523 ] 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "bdev_nvme_set_hotplug", 00:13:16.523 "params": { 00:13:16.523 "period_us": 100000, 00:13:16.523 "enable": false 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "bdev_malloc_create", 00:13:16.523 "params": { 00:13:16.523 "name": "malloc0", 00:13:16.523 "num_blocks": 8192, 00:13:16.523 "block_size": 4096, 00:13:16.523 "physical_block_size": 4096, 00:13:16.523 "uuid": "9303f2f8-eb2c-4845-a3a9-c7c85864c85d", 00:13:16.523 "optimal_io_boundary": 0, 00:13:16.523 "md_size": 0, 00:13:16.523 "dif_type": 0, 00:13:16.523 "dif_is_head_of_md": false, 00:13:16.523 "dif_pi_format": 0 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "bdev_wait_for_examine" 00:13:16.523 } 00:13:16.523 ] 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "subsystem": "nbd", 00:13:16.523 "config": [] 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "subsystem": "scheduler", 00:13:16.523 "config": [ 00:13:16.523 { 00:13:16.523 "method": "framework_set_scheduler", 00:13:16.523 "params": { 00:13:16.523 "name": "static" 00:13:16.523 } 00:13:16.523 } 00:13:16.523 ] 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "subsystem": "nvmf", 00:13:16.523 "config": [ 00:13:16.523 { 00:13:16.523 "method": "nvmf_set_config", 00:13:16.523 "params": { 00:13:16.523 "discovery_filter": "match_any", 00:13:16.523 "admin_cmd_passthru": { 00:13:16.523 "identify_ctrlr": false 00:13:16.523 }, 00:13:16.523 "dhchap_digests": [ 00:13:16.523 "sha256", 00:13:16.523 "sha384", 00:13:16.523 "sha512" 00:13:16.523 ], 00:13:16.523 "dhchap_dhgroups": [ 00:13:16.523 "null", 00:13:16.523 "ffdhe2048", 00:13:16.523 "ffdhe3072", 00:13:16.523 "ffdhe4096", 00:13:16.523 "ffdhe6144", 00:13:16.523 "ffdhe8192" 00:13:16.523 ] 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "nvmf_set_max_subsystems", 00:13:16.523 "params": { 00:13:16.523 "max_subsystems": 1024 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "nvmf_set_crdt", 00:13:16.523 "params": { 00:13:16.523 "crdt1": 0, 00:13:16.523 "crdt2": 0, 00:13:16.523 "crdt3": 0 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "nvmf_create_transport", 00:13:16.523 "params": { 00:13:16.523 "trtype": "TCP", 00:13:16.523 "max_queue_depth": 128, 00:13:16.523 "max_io_qpairs_per_ctrlr": 127, 00:13:16.523 "in_capsule_data_size": 4096, 00:13:16.523 "max_io_size": 131072, 00:13:16.523 "io_unit_size": 131072, 00:13:16.523 "max_aq_depth": 128, 00:13:16.523 "num_shared_buffers": 511, 00:13:16.523 "buf_cache_size": 4294967295, 00:13:16.523 "dif_insert_or_strip": false, 00:13:16.523 "zcopy": false, 00:13:16.523 "c2h_success": false, 00:13:16.523 "sock_priority": 0, 00:13:16.523 "abort_timeout_sec": 1, 00:13:16.523 "ack_timeout": 0, 00:13:16.523 "data_wr_pool_size": 0 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "nvmf_create_subsystem", 00:13:16.523 "params": { 00:13:16.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:16.523 "allow_any_host": false, 00:13:16.523 "serial_number": "00000000000000000000", 00:13:16.523 "model_number": "SPDK bdev Controller", 00:13:16.523 "max_namespaces": 32, 00:13:16.523 "min_cntlid": 1, 00:13:16.523 "max_cntlid": 65519, 00:13:16.523 "ana_reporting": false 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "nvmf_subsystem_add_host", 
00:13:16.523 "params": { 00:13:16.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:16.523 "host": "nqn.2016-06.io.spdk:host1", 00:13:16.523 "psk": "key0" 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "nvmf_subsystem_add_ns", 00:13:16.523 "params": { 00:13:16.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:16.523 "namespace": { 00:13:16.523 "nsid": 1, 00:13:16.523 "bdev_name": "malloc0", 00:13:16.523 "nguid": "9303F2F8EB2C4845A3A9C7C85864C85D", 00:13:16.523 "uuid": "9303f2f8-eb2c-4845-a3a9-c7c85864c85d", 00:13:16.523 "no_auto_visible": false 00:13:16.523 } 00:13:16.523 } 00:13:16.523 }, 00:13:16.523 { 00:13:16.523 "method": "nvmf_subsystem_add_listener", 00:13:16.523 "params": { 00:13:16.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:16.523 "listen_address": { 00:13:16.523 "trtype": "TCP", 00:13:16.523 "adrfam": "IPv4", 00:13:16.523 "traddr": "10.0.0.3", 00:13:16.523 "trsvcid": "4420" 00:13:16.523 }, 00:13:16.523 "secure_channel": false, 00:13:16.523 "sock_impl": "ssl" 00:13:16.523 } 00:13:16.523 } 00:13:16.523 ] 00:13:16.523 } 00:13:16.523 ] 00:13:16.523 }' 00:13:16.523 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:16.783 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:13:16.783 "subsystems": [ 00:13:16.783 { 00:13:16.783 "subsystem": "keyring", 00:13:16.783 "config": [ 00:13:16.783 { 00:13:16.783 "method": "keyring_file_add_key", 00:13:16.783 "params": { 00:13:16.783 "name": "key0", 00:13:16.783 "path": "/tmp/tmp.CyTxKSqpAN" 00:13:16.783 } 00:13:16.783 } 00:13:16.783 ] 00:13:16.783 }, 00:13:16.783 { 00:13:16.783 "subsystem": "iobuf", 00:13:16.783 "config": [ 00:13:16.783 { 00:13:16.783 "method": "iobuf_set_options", 00:13:16.783 "params": { 00:13:16.783 "small_pool_count": 8192, 00:13:16.783 "large_pool_count": 1024, 00:13:16.783 "small_bufsize": 8192, 00:13:16.783 "large_bufsize": 135168, 00:13:16.783 "enable_numa": false 00:13:16.783 } 00:13:16.783 } 00:13:16.783 ] 00:13:16.783 }, 00:13:16.783 { 00:13:16.783 "subsystem": "sock", 00:13:16.783 "config": [ 00:13:16.783 { 00:13:16.783 "method": "sock_set_default_impl", 00:13:16.783 "params": { 00:13:16.783 "impl_name": "uring" 00:13:16.783 } 00:13:16.783 }, 00:13:16.783 { 00:13:16.783 "method": "sock_impl_set_options", 00:13:16.783 "params": { 00:13:16.783 "impl_name": "ssl", 00:13:16.783 "recv_buf_size": 4096, 00:13:16.783 "send_buf_size": 4096, 00:13:16.783 "enable_recv_pipe": true, 00:13:16.783 "enable_quickack": false, 00:13:16.783 "enable_placement_id": 0, 00:13:16.783 "enable_zerocopy_send_server": true, 00:13:16.783 "enable_zerocopy_send_client": false, 00:13:16.783 "zerocopy_threshold": 0, 00:13:16.783 "tls_version": 0, 00:13:16.783 "enable_ktls": false 00:13:16.783 } 00:13:16.783 }, 00:13:16.783 { 00:13:16.783 "method": "sock_impl_set_options", 00:13:16.783 "params": { 00:13:16.783 "impl_name": "posix", 00:13:16.783 "recv_buf_size": 2097152, 00:13:16.783 "send_buf_size": 2097152, 00:13:16.783 "enable_recv_pipe": true, 00:13:16.783 "enable_quickack": false, 00:13:16.783 "enable_placement_id": 0, 00:13:16.783 "enable_zerocopy_send_server": true, 00:13:16.783 "enable_zerocopy_send_client": false, 00:13:16.783 "zerocopy_threshold": 0, 00:13:16.783 "tls_version": 0, 00:13:16.783 "enable_ktls": false 00:13:16.783 } 00:13:16.783 }, 00:13:16.783 { 00:13:16.783 "method": "sock_impl_set_options", 00:13:16.783 "params": { 00:13:16.783 "impl_name": "uring", 00:13:16.783 
"recv_buf_size": 2097152, 00:13:16.783 "send_buf_size": 2097152, 00:13:16.783 "enable_recv_pipe": true, 00:13:16.783 "enable_quickack": false, 00:13:16.783 "enable_placement_id": 0, 00:13:16.783 "enable_zerocopy_send_server": false, 00:13:16.783 "enable_zerocopy_send_client": false, 00:13:16.783 "zerocopy_threshold": 0, 00:13:16.783 "tls_version": 0, 00:13:16.783 "enable_ktls": false 00:13:16.783 } 00:13:16.783 } 00:13:16.783 ] 00:13:16.783 }, 00:13:16.783 { 00:13:16.783 "subsystem": "vmd", 00:13:16.783 "config": [] 00:13:16.783 }, 00:13:16.783 { 00:13:16.783 "subsystem": "accel", 00:13:16.783 "config": [ 00:13:16.783 { 00:13:16.783 "method": "accel_set_options", 00:13:16.783 "params": { 00:13:16.783 "small_cache_size": 128, 00:13:16.783 "large_cache_size": 16, 00:13:16.783 "task_count": 2048, 00:13:16.783 "sequence_count": 2048, 00:13:16.783 "buf_count": 2048 00:13:16.783 } 00:13:16.783 } 00:13:16.783 ] 00:13:16.784 }, 00:13:16.784 { 00:13:16.784 "subsystem": "bdev", 00:13:16.784 "config": [ 00:13:16.784 { 00:13:16.784 "method": "bdev_set_options", 00:13:16.784 "params": { 00:13:16.784 "bdev_io_pool_size": 65535, 00:13:16.784 "bdev_io_cache_size": 256, 00:13:16.784 "bdev_auto_examine": true, 00:13:16.784 "iobuf_small_cache_size": 128, 00:13:16.784 "iobuf_large_cache_size": 16 00:13:16.784 } 00:13:16.784 }, 00:13:16.784 { 00:13:16.784 "method": "bdev_raid_set_options", 00:13:16.784 "params": { 00:13:16.784 "process_window_size_kb": 1024, 00:13:16.784 "process_max_bandwidth_mb_sec": 0 00:13:16.784 } 00:13:16.784 }, 00:13:16.784 { 00:13:16.784 "method": "bdev_iscsi_set_options", 00:13:16.784 "params": { 00:13:16.784 "timeout_sec": 30 00:13:16.784 } 00:13:16.784 }, 00:13:16.784 { 00:13:16.784 "method": "bdev_nvme_set_options", 00:13:16.784 "params": { 00:13:16.784 "action_on_timeout": "none", 00:13:16.784 "timeout_us": 0, 00:13:16.784 "timeout_admin_us": 0, 00:13:16.784 "keep_alive_timeout_ms": 10000, 00:13:16.784 "arbitration_burst": 0, 00:13:16.784 "low_priority_weight": 0, 00:13:16.784 "medium_priority_weight": 0, 00:13:16.784 "high_priority_weight": 0, 00:13:16.784 "nvme_adminq_poll_period_us": 10000, 00:13:16.784 "nvme_ioq_poll_period_us": 0, 00:13:16.784 "io_queue_requests": 512, 00:13:16.784 "delay_cmd_submit": true, 00:13:16.784 "transport_retry_count": 4, 00:13:16.784 "bdev_retry_count": 3, 00:13:16.784 "transport_ack_timeout": 0, 00:13:16.784 "ctrlr_loss_timeout_sec": 0, 00:13:16.784 "reconnect_delay_sec": 0, 00:13:16.784 "fast_io_fail_timeout_sec": 0, 00:13:16.784 "disable_auto_failback": false, 00:13:16.784 "generate_uuids": false, 00:13:16.784 "transport_tos": 0, 00:13:16.784 "nvme_error_stat": false, 00:13:16.784 "rdma_srq_size": 0, 00:13:16.784 "io_path_stat": false, 00:13:16.784 "allow_accel_sequence": false, 00:13:16.784 "rdma_max_cq_size": 0, 00:13:16.784 "rdma_cm_event_timeout_ms": 0, 00:13:16.784 "dhchap_digests": [ 00:13:16.784 "sha256", 00:13:16.784 "sha384", 00:13:16.784 "sha512" 00:13:16.784 ], 00:13:16.784 "dhchap_dhgroups": [ 00:13:16.784 "null", 00:13:16.784 "ffdhe2048", 00:13:16.784 "ffdhe3072", 00:13:16.784 "ffdhe4096", 00:13:16.784 "ffdhe6144", 00:13:16.784 "ffdhe8192" 00:13:16.784 ] 00:13:16.784 } 00:13:16.784 }, 00:13:16.784 { 00:13:16.784 "method": "bdev_nvme_attach_controller", 00:13:16.784 "params": { 00:13:16.784 "name": "nvme0", 00:13:16.784 "trtype": "TCP", 00:13:16.784 "adrfam": "IPv4", 00:13:16.784 "traddr": "10.0.0.3", 00:13:16.784 "trsvcid": "4420", 00:13:16.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:16.784 "prchk_reftag": false, 00:13:16.784 
"prchk_guard": false, 00:13:16.784 "ctrlr_loss_timeout_sec": 0, 00:13:16.784 "reconnect_delay_sec": 0, 00:13:16.784 "fast_io_fail_timeout_sec": 0, 00:13:16.784 "psk": "key0", 00:13:16.784 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:16.784 "hdgst": false, 00:13:16.784 "ddgst": false, 00:13:16.784 "multipath": "multipath" 00:13:16.784 } 00:13:16.784 }, 00:13:16.784 { 00:13:16.784 "method": "bdev_nvme_set_hotplug", 00:13:16.784 "params": { 00:13:16.784 "period_us": 100000, 00:13:16.784 "enable": false 00:13:16.784 } 00:13:16.784 }, 00:13:16.784 { 00:13:16.784 "method": "bdev_enable_histogram", 00:13:16.784 "params": { 00:13:16.784 "name": "nvme0n1", 00:13:16.784 "enable": true 00:13:16.784 } 00:13:16.784 }, 00:13:16.784 { 00:13:16.784 "method": "bdev_wait_for_examine" 00:13:16.784 } 00:13:16.784 ] 00:13:16.784 }, 00:13:16.784 { 00:13:16.784 "subsystem": "nbd", 00:13:16.784 "config": [] 00:13:16.784 } 00:13:16.784 ] 00:13:16.784 }' 00:13:16.784 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 71802 00:13:16.784 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71802 ']' 00:13:16.784 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71802 00:13:16.784 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:16.784 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.784 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71802 00:13:16.784 killing process with pid 71802 00:13:16.784 Received shutdown signal, test time was about 1.000000 seconds 00:13:16.784 00:13:16.784 Latency(us) 00:13:16.784 [2024-12-06T12:21:03.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.784 [2024-12-06T12:21:03.442Z] =================================================================================================================== 00:13:16.784 [2024-12-06T12:21:03.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:16.784 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:16.784 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:16.784 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71802' 00:13:16.784 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71802 00:13:16.784 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71802 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 71770 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71770 ']' 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71770 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71770 00:13:17.044 killing process with pid 71770 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71770' 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71770 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71770 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:13:17.044 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:13:17.044 "subsystems": [ 00:13:17.044 { 00:13:17.044 "subsystem": "keyring", 00:13:17.044 "config": [ 00:13:17.044 { 00:13:17.044 "method": "keyring_file_add_key", 00:13:17.044 "params": { 00:13:17.044 "name": "key0", 00:13:17.044 "path": "/tmp/tmp.CyTxKSqpAN" 00:13:17.044 } 00:13:17.044 } 00:13:17.044 ] 00:13:17.044 }, 00:13:17.044 { 00:13:17.044 "subsystem": "iobuf", 00:13:17.044 "config": [ 00:13:17.044 { 00:13:17.044 "method": "iobuf_set_options", 00:13:17.044 "params": { 00:13:17.044 "small_pool_count": 8192, 00:13:17.044 "large_pool_count": 1024, 00:13:17.044 "small_bufsize": 8192, 00:13:17.044 "large_bufsize": 135168, 00:13:17.044 "enable_numa": false 00:13:17.044 } 00:13:17.044 } 00:13:17.044 ] 00:13:17.044 }, 00:13:17.044 { 00:13:17.044 "subsystem": "sock", 00:13:17.044 "config": [ 00:13:17.044 { 00:13:17.044 "method": "sock_set_default_impl", 00:13:17.044 "params": { 00:13:17.044 "impl_name": "uring" 00:13:17.044 } 00:13:17.044 }, 00:13:17.044 { 00:13:17.044 "method": "sock_impl_set_options", 00:13:17.044 "params": { 00:13:17.044 "impl_name": "ssl", 00:13:17.044 "recv_buf_size": 4096, 00:13:17.044 "send_buf_size": 4096, 00:13:17.044 "enable_recv_pipe": true, 00:13:17.044 "enable_quickack": false, 00:13:17.044 "enable_placement_id": 0, 00:13:17.044 "enable_zerocopy_send_server": true, 00:13:17.044 "enable_zerocopy_send_client": false, 00:13:17.044 "zerocopy_threshold": 0, 00:13:17.044 "tls_version": 0, 00:13:17.044 "enable_ktls": false 00:13:17.044 } 00:13:17.044 }, 00:13:17.044 { 00:13:17.044 "method": "sock_impl_set_options", 00:13:17.044 "params": { 00:13:17.044 "impl_name": "posix", 00:13:17.044 "recv_buf_size": 2097152, 00:13:17.044 "send_buf_size": 2097152, 00:13:17.044 "enable_recv_pipe": true, 00:13:17.044 "enable_quickack": false, 00:13:17.044 "enable_placement_id": 0, 00:13:17.044 "enable_zerocopy_send_server": true, 00:13:17.044 "enable_zerocopy_send_client": false, 00:13:17.044 "zerocopy_threshold": 0, 00:13:17.044 "tls_version": 0, 00:13:17.044 "enable_ktls": false 00:13:17.044 } 00:13:17.044 }, 00:13:17.044 { 00:13:17.044 "method": "sock_impl_set_options", 00:13:17.044 "params": { 00:13:17.044 "impl_name": "uring", 00:13:17.044 "recv_buf_size": 2097152, 00:13:17.045 "send_buf_size": 2097152, 00:13:17.045 "enable_recv_pipe": true, 00:13:17.045 "enable_quickack": false, 00:13:17.045 "enable_placement_id": 0, 00:13:17.045 "enable_zerocopy_send_server": false, 00:13:17.045 "enable_zerocopy_send_client": false, 00:13:17.045 "zerocopy_threshold": 0, 00:13:17.045 "tls_version": 0, 00:13:17.045 "enable_ktls": false 00:13:17.045 } 00:13:17.045 } 00:13:17.045 ] 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "subsystem": "vmd", 00:13:17.045 "config": [] 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "subsystem": "accel", 00:13:17.045 "config": [ 00:13:17.045 { 00:13:17.045 "method": "accel_set_options", 00:13:17.045 
"params": { 00:13:17.045 "small_cache_size": 128, 00:13:17.045 "large_cache_size": 16, 00:13:17.045 "task_count": 2048, 00:13:17.045 "sequence_count": 2048, 00:13:17.045 "buf_count": 2048 00:13:17.045 } 00:13:17.045 } 00:13:17.045 ] 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "subsystem": "bdev", 00:13:17.045 "config": [ 00:13:17.045 { 00:13:17.045 "method": "bdev_set_options", 00:13:17.045 "params": { 00:13:17.045 "bdev_io_pool_size": 65535, 00:13:17.045 "bdev_io_cache_size": 256, 00:13:17.045 "bdev_auto_examine": true, 00:13:17.045 "iobuf_small_cache_size": 128, 00:13:17.045 "iobuf_large_cache_size": 16 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "bdev_raid_set_options", 00:13:17.045 "params": { 00:13:17.045 "process_window_size_kb": 1024, 00:13:17.045 "process_max_bandwidth_mb_sec": 0 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "bdev_iscsi_set_options", 00:13:17.045 "params": { 00:13:17.045 "timeout_sec": 30 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "bdev_nvme_set_options", 00:13:17.045 "params": { 00:13:17.045 "action_on_timeout": "none", 00:13:17.045 "timeout_us": 0, 00:13:17.045 "timeout_admin_us": 0, 00:13:17.045 "keep_alive_timeout_ms": 10000, 00:13:17.045 "arbitration_burst": 0, 00:13:17.045 "low_priority_weight": 0, 00:13:17.045 "medium_priority_weight": 0, 00:13:17.045 "high_priority_weight": 0, 00:13:17.045 "nvme_adminq_poll_period_us": 10000, 00:13:17.045 "nvme_ioq_poll_period_us": 0, 00:13:17.045 "io_queue_requests": 0, 00:13:17.045 "delay_cmd_submit": true, 00:13:17.045 "transport_retry_count": 4, 00:13:17.045 "bdev_retry_count": 3, 00:13:17.045 "transport_ack_timeout": 0, 00:13:17.045 "ctrlr_loss_timeout_sec": 0, 00:13:17.045 "reconnect_delay_sec": 0, 00:13:17.045 "fast_io_fail_timeout_sec": 0, 00:13:17.045 "disable_auto_failback": false, 00:13:17.045 "generate_uuids": false, 00:13:17.045 "transport_tos": 0, 00:13:17.045 "nvme_error_stat": false, 00:13:17.045 "rdma_srq_size": 0, 00:13:17.045 "io_path_stat": false, 00:13:17.045 "allow_accel_sequence": false, 00:13:17.045 "rdma_max_cq_size": 0, 00:13:17.045 "rdma_cm_event_timeout_ms": 0, 00:13:17.045 "dhchap_digests": [ 00:13:17.045 "sha256", 00:13:17.045 "sha384", 00:13:17.045 "sha512" 00:13:17.045 ], 00:13:17.045 "dhchap_dhgroups": [ 00:13:17.045 "null", 00:13:17.045 "ffdhe2048", 00:13:17.045 "ffdhe3072", 00:13:17.045 "ffdhe4096", 00:13:17.045 "ffdhe6144", 00:13:17.045 "ffdhe8192" 00:13:17.045 ] 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "bdev_nvme_set_hotplug", 00:13:17.045 "params": { 00:13:17.045 "period_us": 100000, 00:13:17.045 "enable": false 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "bdev_malloc_create", 00:13:17.045 "params": { 00:13:17.045 "name": "malloc0", 00:13:17.045 "num_blocks": 8192, 00:13:17.045 "block_size": 4096, 00:13:17.045 "physical_block_size": 4096, 00:13:17.045 "uuid": "9303f2f8-eb2c-4845-a3a9-c7c85864c85d", 00:13:17.045 "optimal_io_boundary": 0, 00:13:17.045 "md_size": 0, 00:13:17.045 "dif_type": 0, 00:13:17.045 "dif_is_head_of_md": false, 00:13:17.045 "dif_pi_format": 0 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "bdev_wait_for_examine" 00:13:17.045 } 00:13:17.045 ] 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "subsystem": "nbd", 00:13:17.045 "config": [] 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "subsystem": "scheduler", 00:13:17.045 "config": [ 00:13:17.045 { 00:13:17.045 "method": "framework_set_scheduler", 00:13:17.045 "params": { 
00:13:17.045 "name": "static" 00:13:17.045 } 00:13:17.045 } 00:13:17.045 ] 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "subsystem": "nvmf", 00:13:17.045 "config": [ 00:13:17.045 { 00:13:17.045 "method": "nvmf_set_config", 00:13:17.045 "params": { 00:13:17.045 "discovery_filter": "match_any", 00:13:17.045 "admin_cmd_passthru": { 00:13:17.045 "identify_ctrlr": false 00:13:17.045 }, 00:13:17.045 "dhchap_digests": [ 00:13:17.045 "sha256", 00:13:17.045 "sha384", 00:13:17.045 "sha512" 00:13:17.045 ], 00:13:17.045 "dhchap_dhgroups": [ 00:13:17.045 "null", 00:13:17.045 "ffdhe2048", 00:13:17.045 "ffdhe3072", 00:13:17.045 "ffdhe4096", 00:13:17.045 "ffdhe6144", 00:13:17.045 "ffdhe8192" 00:13:17.045 ] 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "nvmf_set_max_subsystems", 00:13:17.045 "params": { 00:13:17.045 "max_subsystems": 1024 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "nvmf_set_crdt", 00:13:17.045 "params": { 00:13:17.045 "crdt1": 0, 00:13:17.045 "crdt2": 0, 00:13:17.045 "crdt3": 0 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "nvmf_create_transport", 00:13:17.045 "params": { 00:13:17.045 "trtype": "TCP", 00:13:17.045 "max_queue_depth": 128, 00:13:17.045 "max_io_qpairs_per_ctrlr": 127, 00:13:17.045 "in_capsule_data_size": 4096, 00:13:17.045 "max_io_size": 131072, 00:13:17.045 "io_unit_size": 131072, 00:13:17.045 "max_aq_depth": 128, 00:13:17.045 "num_shared_buffers": 511, 00:13:17.045 "buf_cache_size": 4294967295, 00:13:17.045 "dif_insert_or_strip": false, 00:13:17.045 "zcopy": false, 00:13:17.045 "c2h_success": false, 00:13:17.045 "sock_priority": 0, 00:13:17.045 "abort_timeout_sec": 1, 00:13:17.045 "ack_timeout": 0, 00:13:17.045 "data_wr_pool_size": 0 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "nvmf_create_subsystem", 00:13:17.045 "params": { 00:13:17.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.045 "allow_any_host": false, 00:13:17.045 "serial_number": "00000000000000000000", 00:13:17.045 "model_number": "SPDK bdev Controller", 00:13:17.045 "max_namespaces": 32, 00:13:17.045 "min_cntlid": 1, 00:13:17.045 "max_cntlid": 65519, 00:13:17.045 "ana_reporting": false 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "nvmf_subsystem_add_host", 00:13:17.045 "params": { 00:13:17.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.045 "host": "nqn.2016-06.io.spdk:host1", 00:13:17.045 "psk": "key0" 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "nvmf_subsystem_add_ns", 00:13:17.045 "params": { 00:13:17.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.045 "namespace": { 00:13:17.045 "nsid": 1, 00:13:17.045 "bdev_name": "malloc0", 00:13:17.045 "nguid": "9303F2F8EB2C4845A3A9C7C85864C85D", 00:13:17.045 "uuid": "9303f2f8-eb2c-4845-a3a9-c7c85864c85d", 00:13:17.045 "no_auto_visible": false 00:13:17.045 } 00:13:17.045 } 00:13:17.045 }, 00:13:17.045 { 00:13:17.045 "method": "nvmf_subsystem_add_listener", 00:13:17.045 "params": { 00:13:17.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.045 "listen_address": { 00:13:17.045 "trtype": "TCP", 00:13:17.045 "adrfam": "IPv4", 00:13:17.045 "traddr": "10.0.0.3", 00:13:17.045 "trsvcid": "4420" 00:13:17.045 }, 00:13:17.045 "secure_channel": false, 00:13:17.045 "sock_impl": "ssl" 00:13:17.045 } 00:13:17.045 } 00:13:17.045 ] 00:13:17.045 } 00:13:17.045 ] 00:13:17.045 }' 00:13:17.045 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:17.045 12:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:17.045 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.045 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71850 00:13:17.045 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:13:17.045 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71850 00:13:17.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.046 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71850 ']' 00:13:17.046 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.046 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.046 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.046 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.046 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.305 [2024-12-06 12:21:03.707785] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:17.305 [2024-12-06 12:21:03.708019] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.305 [2024-12-06 12:21:03.850244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.305 [2024-12-06 12:21:03.876423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.305 [2024-12-06 12:21:03.876698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.305 [2024-12-06 12:21:03.876716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.305 [2024-12-06 12:21:03.876724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.305 [2024-12-06 12:21:03.876731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
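
At this point tls.sh has restarted the target: nvmfappstart echoes the regenerated JSON configuration shown above into a process substitution, launches nvmf_tgt inside the test network namespace, and waitforlisten blocks until the RPC socket responds. A condensed sketch of that launch, with the command line as logged ($config_json stands in for the echoed JSON, and the wait loop is an illustrative equivalent of waitforlisten rather than its verbatim body):

    # Launch nvmf_tgt inside the test netns, feeding the JSON config through a
    # process substitution (it shows up to the app as /dev/fd/62).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        -c <(echo "$config_json") &
    nvmfpid=$!

    # Block until the target's RPC socket answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 \
            rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
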
00:13:17.305 [2024-12-06 12:21:03.877068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.565 [2024-12-06 12:21:04.019031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:17.565 [2024-12-06 12:21:04.077071] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.565 [2024-12-06 12:21:04.109037] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:17.565 [2024-12-06 12:21:04.109243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:18.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=71882 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 71882 /var/tmp/bdevperf.sock 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71882 ']' 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
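
The initiator side is a bdevperf instance started the same way, per the command traced just above: -z keeps it idle until an RPC starts the workload, -r points it at its own RPC socket, and the generated initiator config that follows in the log (keyring PSK, ssl/posix/uring sock options, the nvme0 attach with psk key0) arrives on /dev/fd/63. A condensed sketch, with $bperf_config standing in for that JSON:

    # Core mask 0x2, -z = start idle and wait for an RPC, queue depth 128,
    # 4 KiB verify workload, 1 second run.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bperf_config") &
    bdevperf_pid=$!
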
00:13:18.134 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:13:18.134 "subsystems": [ 00:13:18.134 { 00:13:18.134 "subsystem": "keyring", 00:13:18.134 "config": [ 00:13:18.134 { 00:13:18.134 "method": "keyring_file_add_key", 00:13:18.134 "params": { 00:13:18.134 "name": "key0", 00:13:18.134 "path": "/tmp/tmp.CyTxKSqpAN" 00:13:18.134 } 00:13:18.134 } 00:13:18.134 ] 00:13:18.134 }, 00:13:18.134 { 00:13:18.134 "subsystem": "iobuf", 00:13:18.134 "config": [ 00:13:18.134 { 00:13:18.134 "method": "iobuf_set_options", 00:13:18.134 "params": { 00:13:18.134 "small_pool_count": 8192, 00:13:18.134 "large_pool_count": 1024, 00:13:18.134 "small_bufsize": 8192, 00:13:18.134 "large_bufsize": 135168, 00:13:18.134 "enable_numa": false 00:13:18.134 } 00:13:18.135 } 00:13:18.135 ] 00:13:18.135 }, 00:13:18.135 { 00:13:18.135 "subsystem": "sock", 00:13:18.135 "config": [ 00:13:18.135 { 00:13:18.135 "method": "sock_set_default_impl", 00:13:18.135 "params": { 00:13:18.135 "impl_name": "uring" 00:13:18.135 } 00:13:18.135 }, 00:13:18.135 { 00:13:18.135 "method": "sock_impl_set_options", 00:13:18.135 "params": { 00:13:18.135 "impl_name": "ssl", 00:13:18.135 "recv_buf_size": 4096, 00:13:18.135 "send_buf_size": 4096, 00:13:18.135 "enable_recv_pipe": true, 00:13:18.135 "enable_quickack": false, 00:13:18.135 "enable_placement_id": 0, 00:13:18.135 "enable_zerocopy_send_server": true, 00:13:18.135 "enable_zerocopy_send_client": false, 00:13:18.135 "zerocopy_threshold": 0, 00:13:18.135 "tls_version": 0, 00:13:18.135 "enable_ktls": false 00:13:18.135 } 00:13:18.135 }, 00:13:18.135 { 00:13:18.135 "method": "sock_impl_set_options", 00:13:18.135 "params": { 00:13:18.135 "impl_name": "posix", 00:13:18.135 "recv_buf_size": 2097152, 00:13:18.135 "send_buf_size": 2097152, 00:13:18.135 "enable_recv_pipe": true, 00:13:18.135 "enable_quickack": false, 00:13:18.135 "enable_placement_id": 0, 00:13:18.135 "enable_zerocopy_send_server": true, 00:13:18.135 "enable_zerocopy_send_client": false, 00:13:18.135 "zerocopy_threshold": 0, 00:13:18.135 "tls_version": 0, 00:13:18.135 "enable_ktls": false 00:13:18.135 } 00:13:18.135 }, 00:13:18.135 { 00:13:18.135 "method": "sock_impl_set_options", 00:13:18.135 "params": { 00:13:18.135 "impl_name": "uring", 00:13:18.135 "recv_buf_size": 2097152, 00:13:18.135 "send_buf_size": 2097152, 00:13:18.135 "enable_recv_pipe": true, 00:13:18.135 "enable_quickack": false, 00:13:18.135 "enable_placement_id": 0, 00:13:18.135 "enable_zerocopy_send_server": false, 00:13:18.135 "enable_zerocopy_send_client": false, 00:13:18.135 "zerocopy_threshold": 0, 00:13:18.135 "tls_version": 0, 00:13:18.135 "enable_ktls": false 00:13:18.135 } 00:13:18.135 } 00:13:18.135 ] 00:13:18.135 }, 00:13:18.135 { 00:13:18.135 "subsystem": "vmd", 00:13:18.135 "config": [] 00:13:18.135 }, 00:13:18.135 { 00:13:18.135 "subsystem": "accel", 00:13:18.135 "config": [ 00:13:18.135 { 00:13:18.135 "method": "accel_set_options", 00:13:18.135 "params": { 00:13:18.135 "small_cache_size": 128, 00:13:18.135 "large_cache_size": 16, 00:13:18.135 "task_count": 2048, 00:13:18.135 "sequence_count": 2048, 00:13:18.135 "buf_count": 2048 00:13:18.135 } 00:13:18.135 } 00:13:18.135 ] 00:13:18.135 }, 00:13:18.135 { 00:13:18.135 "subsystem": "bdev", 00:13:18.135 "config": [ 00:13:18.135 { 00:13:18.135 "method": "bdev_set_options", 00:13:18.135 "params": { 00:13:18.135 "bdev_io_pool_size": 65535, 00:13:18.135 "bdev_io_cache_size": 256, 00:13:18.135 "bdev_auto_examine": true, 00:13:18.135 "iobuf_small_cache_size": 128, 00:13:18.135 
"iobuf_large_cache_size": 16 00:13:18.135 } 00:13:18.135 }, 00:13:18.135 { 00:13:18.135 "method": "bdev_raid_set_options", 00:13:18.135 "params": { 00:13:18.135 "process_window_size_kb": 1024, 00:13:18.135 "process_max_bandwidth_mb_sec": 0 00:13:18.135 } 00:13:18.135 }, 00:13:18.135 { 00:13:18.135 "method": "bdev_iscsi_set_options", 00:13:18.135 "params": { 00:13:18.135 "timeout_sec": 30 00:13:18.135 } 00:13:18.135 }, 00:13:18.135 { 00:13:18.135 "method": "bdev_nvme_set_options", 00:13:18.135 "params": { 00:13:18.135 "action_on_timeout": "none", 00:13:18.135 "timeout_us": 0, 00:13:18.135 "timeout_admin_us": 0, 00:13:18.135 "keep_alive_timeout_ms": 10000, 00:13:18.135 "arbitration_burst": 0, 00:13:18.135 "low_priority_weight": 0, 00:13:18.135 "medium_priority_weight": 0, 00:13:18.135 "high_priority_weight": 0, 00:13:18.135 "nvme_adminq_poll_period_us": 10000, 00:13:18.135 "nvme_ioq_poll_period_us": 0, 00:13:18.135 "io_queue_requests": 512, 00:13:18.135 "delay_cmd_submit": true, 00:13:18.135 "transport_retry_count": 4, 00:13:18.135 "bdev_retry_count": 3, 00:13:18.135 "transport_ack_timeout": 0, 00:13:18.135 "ctrlr_loss_timeout_sec": 0, 00:13:18.135 "reconnect_delay_sec": 0, 00:13:18.135 "fast_io_fail_timeout_sec": 0, 00:13:18.135 "disable_auto_failback": false, 00:13:18.135 "generate_uuids": false, 00:13:18.135 "transport_tos": 0, 00:13:18.135 "nvme_error_stat": false, 00:13:18.135 "rdma_srq_size": 0, 00:13:18.135 "io_path_stat": false, 00:13:18.135 "allow_accel_sequence": false, 00:13:18.135 "rdma_max_cq_size": 0, 00:13:18.135 "rdma_cm_event_timeout_ms": 0, 00:13:18.135 "dhchap_digests": [ 00:13:18.135 "sha256", 00:13:18.135 "sha384", 00:13:18.135 "sha512" 00:13:18.135 ], 00:13:18.135 "dhchap_dhgroups": [ 00:13:18.135 "null", 00:13:18.135 "ffdhe2048", 00:13:18.135 "ffdhe3072", 00:13:18.135 "ffdhe4096", 00:13:18.135 "ffdhe6144", 00:13:18.135 "ffdhe8192" 00:13:18.135 ] 00:13:18.135 } 00:13:18.135 }, 00:13:18.135 { 00:13:18.135 "method": "bdev_nvme_attach_controller", 00:13:18.135 "params": { 00:13:18.135 "name": "nvme0", 00:13:18.135 "trtype": "TCP", 00:13:18.135 "adrfam": "IPv4", 00:13:18.135 "traddr": "10.0.0.3", 00:13:18.135 "trsvcid": "4420", 00:13:18.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.135 "prchk_reftag": false, 00:13:18.135 "prchk_guard": false, 00:13:18.135 "ctrlr_loss_timeout_sec": 0, 00:13:18.135 "reconnect_delay_sec": 0, 00:13:18.135 "fast_io_fail_timeout_sec": 0, 00:13:18.136 "psk": "key0", 00:13:18.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.136 "hdgst": false, 00:13:18.136 "ddgst": false, 00:13:18.136 "multipath": "multipath" 00:13:18.136 } 00:13:18.136 }, 00:13:18.136 { 00:13:18.136 "method": "bdev_nvme_set_hotplug", 00:13:18.136 "params": { 00:13:18.136 "period_us": 100000, 00:13:18.136 "enable": false 00:13:18.136 } 00:13:18.136 }, 00:13:18.136 { 00:13:18.136 "method": "bdev_enable_histogram", 00:13:18.136 "params": { 00:13:18.136 "name": "nvme0n1", 00:13:18.136 "enable": true 00:13:18.136 } 00:13:18.136 }, 00:13:18.136 { 00:13:18.136 "method": "bdev_wait_for_examine" 00:13:18.136 } 00:13:18.136 ] 00:13:18.136 }, 00:13:18.136 { 00:13:18.136 "subsystem": "nbd", 00:13:18.136 "config": [] 00:13:18.136 } 00:13:18.136 ] 00:13:18.136 }' 00:13:18.136 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.136 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:18.136 [2024-12-06 12:21:04.775123] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 
initialization... 00:13:18.136 [2024-12-06 12:21:04.775429] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71882 ] 00:13:18.395 [2024-12-06 12:21:04.925076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.395 [2024-12-06 12:21:04.964376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.655 [2024-12-06 12:21:05.077073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:18.655 [2024-12-06 12:21:05.108627] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:19.219 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.219 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:19.219 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:19.219 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:13:19.477 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.477 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:19.477 Running I/O for 1 seconds... 00:13:20.853 4761.00 IOPS, 18.60 MiB/s 00:13:20.853 Latency(us) 00:13:20.853 [2024-12-06T12:21:07.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.854 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:20.854 Verification LBA range: start 0x0 length 0x2000 00:13:20.854 nvme0n1 : 1.02 4802.01 18.76 0.00 0.00 26342.36 2442.71 17635.14 00:13:20.854 [2024-12-06T12:21:07.512Z] =================================================================================================================== 00:13:20.854 [2024-12-06T12:21:07.512Z] Total : 4802.01 18.76 0.00 0.00 26342.36 2442.71 17635.14 00:13:20.854 { 00:13:20.854 "results": [ 00:13:20.854 { 00:13:20.854 "job": "nvme0n1", 00:13:20.854 "core_mask": "0x2", 00:13:20.854 "workload": "verify", 00:13:20.854 "status": "finished", 00:13:20.854 "verify_range": { 00:13:20.854 "start": 0, 00:13:20.854 "length": 8192 00:13:20.854 }, 00:13:20.854 "queue_depth": 128, 00:13:20.854 "io_size": 4096, 00:13:20.854 "runtime": 1.018115, 00:13:20.854 "iops": 4802.011560580092, 00:13:20.854 "mibps": 18.757857658515984, 00:13:20.854 "io_failed": 0, 00:13:20.854 "io_timeout": 0, 00:13:20.854 "avg_latency_us": 26342.362394243108, 00:13:20.854 "min_latency_us": 2442.7054545454544, 00:13:20.854 "max_latency_us": 17635.14181818182 00:13:20.854 } 00:13:20.854 ], 00:13:20.854 "core_count": 1 00:13:20.854 } 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@813 -- # id=0 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:20.854 nvmf_trace.0 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 71882 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71882 ']' 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71882 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71882 00:13:20.854 killing process with pid 71882 00:13:20.854 Received shutdown signal, test time was about 1.000000 seconds 00:13:20.854 00:13:20.854 Latency(us) 00:13:20.854 [2024-12-06T12:21:07.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.854 [2024-12-06T12:21:07.512Z] =================================================================================================================== 00:13:20.854 [2024-12-06T12:21:07.512Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71882' 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71882 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71882 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:20.854 rmmod nvme_tcp 00:13:20.854 rmmod nvme_fabrics 00:13:20.854 rmmod nvme_keyring 00:13:20.854 12:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 71850 ']' 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 71850 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71850 ']' 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71850 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.854 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71850 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71850' 00:13:21.113 killing process with pid 71850 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71850 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71850 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:21.113 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9q2xGetvlB /tmp/tmp.6DLrBNSnHD /tmp/tmp.CyTxKSqpAN 00:13:21.372 00:13:21.372 real 1m21.790s 00:13:21.372 user 2m12.648s 00:13:21.372 sys 0m25.864s 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 ************************************ 00:13:21.372 END TEST nvmf_tls 00:13:21.372 ************************************ 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 ************************************ 00:13:21.372 START TEST nvmf_fips 00:13:21.372 ************************************ 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:21.372 * Looking for test storage... 
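
Recapping the nvmf_tls measurement above before the FIPS suite output begins in earnest: the ~4.8k IOPS verify pass was driven by two commands, both visible in the trace (paths as logged):

    # Confirm the controller built from the initiator config really is nvme0 ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'     # -> nvme0

    # ... then release the idle (-z) bdevperf so it runs the queued verify workload.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests
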
00:13:21.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:13:21.372 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:21.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.633 --rc genhtml_branch_coverage=1 00:13:21.633 --rc genhtml_function_coverage=1 00:13:21.633 --rc genhtml_legend=1 00:13:21.633 --rc geninfo_all_blocks=1 00:13:21.633 --rc geninfo_unexecuted_blocks=1 00:13:21.633 00:13:21.633 ' 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:21.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.633 --rc genhtml_branch_coverage=1 00:13:21.633 --rc genhtml_function_coverage=1 00:13:21.633 --rc genhtml_legend=1 00:13:21.633 --rc geninfo_all_blocks=1 00:13:21.633 --rc geninfo_unexecuted_blocks=1 00:13:21.633 00:13:21.633 ' 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:21.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.633 --rc genhtml_branch_coverage=1 00:13:21.633 --rc genhtml_function_coverage=1 00:13:21.633 --rc genhtml_legend=1 00:13:21.633 --rc geninfo_all_blocks=1 00:13:21.633 --rc geninfo_unexecuted_blocks=1 00:13:21.633 00:13:21.633 ' 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:21.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.633 --rc genhtml_branch_coverage=1 00:13:21.633 --rc genhtml_function_coverage=1 00:13:21.633 --rc genhtml_legend=1 00:13:21.633 --rc geninfo_all_blocks=1 00:13:21.633 --rc geninfo_unexecuted_blocks=1 00:13:21.633 00:13:21.633 ' 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
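
The lt/ge helpers traced here (and again a few lines below, for the openssl-version >= 3.0.0 gate in fips.sh) both funnel into cmp_versions from scripts/common.sh, which splits the two version strings on '.', '-' and ':' and compares them component by component. The real helper keeps lt/gt/eq counters and a case over the operator, as the trace shows; the following is a condensed, functionally similar sketch, not the verbatim source:

    cmp_versions_sketch() {
        local op=$2 v
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # Walk the longer of the two component lists; missing components count as 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == *'>'* ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]    # all components equal
    }

    cmp_versions_sketch 1.15 '<' 2         # the lcov gate above: succeeds
    cmp_versions_sketch 3.1.1 '>=' 3.0.0   # the openssl gate below: succeeds
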
00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:21.633 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.634 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:21.634 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
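The cmp_versions helper traced above (used once for the lcov version and again to confirm OpenSSL 3.1.1 >= 3.0.0) splits both version strings on '.', '-' and ':' and compares the numeric fields left to right, succeeding as soon as the left side wins or the fields run out equal. A simplified standalone sketch of the same idea (not the exact scripts/common.sh implementation):

# Sketch: greater-or-equal version compare in the spirit of scripts/common.sh cmp_versions.
ver_ge() {
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
    done
    return 0    # all fields equal
}
ver_ge 3.1.1 3.0.0 && echo 'OpenSSL version is new enough'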
-t 0 ]] 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:13:21.635 Error setting digest 00:13:21.635 40129A1E4B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:13:21.635 40129A1E4B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:21.635 
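The fips.sh checks traced above boil down to: the fips module must exist under the directory reported by 'openssl info -modulesdir', the Red Hat "fipsinstall ... not enabled" warning is tolerated, an spdk_fips.conf is generated via build_openssl_config and exported as OPENSSL_CONF, 'openssl list -providers' must show both a base and a fips provider, and MD5 must be rejected (the "Error setting digest" lines above are the expected, passing outcome). A condensed sketch of the same preconditions, assuming an OpenSSL 3.x host with the FIPS provider installed:

# Sketch of the FIPS preconditions exercised above; spdk_fips.conf is generated by
# fips.sh (build_openssl_config) earlier in the run and is assumed to exist already.
moddir=$(openssl info -modulesdir)
[[ -f "$moddir/fips.so" ]] || { echo 'no FIPS provider module'; exit 1; }
export OPENSSL_CONF=spdk_fips.conf
openssl list -providers | grep -qi 'base provider' || { echo 'base provider missing'; exit 1; }
openssl list -providers | grep -qi 'fips provider' || { echo 'fips provider missing'; exit 1; }
# Under FIPS, non-approved digests must be rejected:
if echo test | openssl md5 >/dev/null 2>&1; then
    echo 'MD5 unexpectedly succeeded - FIPS restrictions not in effect'; exit 1
fi
echo 'FIPS provider active and enforcing'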
12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:21.635 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:21.895 Cannot find device "nvmf_init_br" 00:13:21.895 12:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:21.895 Cannot find device "nvmf_init_br2" 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:21.895 Cannot find device "nvmf_tgt_br" 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:21.895 Cannot find device "nvmf_tgt_br2" 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:21.895 Cannot find device "nvmf_init_br" 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:21.895 Cannot find device "nvmf_init_br2" 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:21.895 Cannot find device "nvmf_tgt_br" 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:21.895 Cannot find device "nvmf_tgt_br2" 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:21.895 Cannot find device "nvmf_br" 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:21.895 Cannot find device "nvmf_init_if" 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:21.895 Cannot find device "nvmf_init_if2" 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:21.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:21.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:21.895 12:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:21.895 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:22.154 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:22.154 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:22.155 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:22.155 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:13:22.155 00:13:22.155 --- 10.0.0.3 ping statistics --- 00:13:22.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.155 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:22.155 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:22.155 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:13:22.155 00:13:22.155 --- 10.0.0.4 ping statistics --- 00:13:22.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.155 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:22.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:22.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:22.155 00:13:22.155 --- 10.0.0.1 ping statistics --- 00:13:22.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.155 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:22.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:22.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:13:22.155 00:13:22.155 --- 10.0.0.2 ping statistics --- 00:13:22.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.155 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72201 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72201 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72201 ']' 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.155 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:22.155 [2024-12-06 12:21:08.714009] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
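nvmf_veth_init above builds the test network that the target is then started on: veth pairs nvmf_init_if/nvmf_init_br and nvmf_init_if2/nvmf_init_br2 stay in the host namespace with 10.0.0.1 and 10.0.0.2, the peers nvmf_tgt_if/nvmf_tgt_if2 are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, everything is tied together through the nvmf_br bridge, port 4420 is opened with iptables, and the four pings confirm reachability before nvmf_tgt is launched inside the namespace. A trimmed, root-only sketch of one initiator/target pair using the same names and addresses as the trace:

# One init/target veth pair from the topology above (run as root; the second pair is omitted).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # host namespace -> target namespace
# The target itself is then run inside the namespace, exactly as in the trace:
# ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2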
00:13:22.155 [2024-12-06 12:21:08.714082] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.415 [2024-12-06 12:21:08.849957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.415 [2024-12-06 12:21:08.879048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.415 [2024-12-06 12:21:08.879110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.415 [2024-12-06 12:21:08.879120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.415 [2024-12-06 12:21:08.879127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.415 [2024-12-06 12:21:08.879133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.415 [2024-12-06 12:21:08.879455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.415 [2024-12-06 12:21:08.907895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:22.415 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.415 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:13:22.415 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:22.415 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:22.415 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:22.415 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.415 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:13:22.415 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:22.415 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:13:22.415 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.20V 00:13:22.415 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:22.415 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.20V 00:13:22.415 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.20V 00:13:22.415 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.20V 00:13:22.415 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.674 [2024-12-06 12:21:09.306623] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.674 [2024-12-06 12:21:09.322589] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:22.674 [2024-12-06 12:21:09.322757] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:22.933 malloc0 00:13:22.933 12:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:22.933 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72234 00:13:22.933 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72234 /var/tmp/bdevperf.sock 00:13:22.933 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72234 ']' 00:13:22.933 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.933 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:22.933 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.933 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.933 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.933 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:22.933 [2024-12-06 12:21:09.448058] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:22.933 [2024-12-06 12:21:09.448142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72234 ] 00:13:22.933 [2024-12-06 12:21:09.586829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.192 [2024-12-06 12:21:09.617406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.193 [2024-12-06 12:21:09.645293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:23.761 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.761 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:13:23.761 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.20V 00:13:24.021 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:24.280 [2024-12-06 12:21:10.815834] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:24.280 TLSTESTn1 00:13:24.280 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:24.539 Running I/O for 10 seconds... 
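The initiator side of the TLS test is fully visible above: the PSK interchange key (NVMeTLSkey-1:01:...) is written to a 0600 temp file, bdevperf is started with its own RPC socket, the key file is registered in bdevperf's keyring as key0, and bdev_nvme_attach_controller connects to 10.0.0.3:4420 with --psk key0 before bdevperf.py drives the 10-second verify workload. Condensed from the commands in the trace (paths shortened; /tmp/spdk-psk.20V is simply whatever mktemp returned on this run):

# Initiator-side TLS setup, condensed from the trace above.
key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# (the real script waits for the bdevperf RPC socket via waitforlisten before issuing these RPCs)
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests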
00:13:26.412 4555.00 IOPS, 17.79 MiB/s [2024-12-06T12:21:14.449Z] 4605.00 IOPS, 17.99 MiB/s [2024-12-06T12:21:15.384Z] 4608.00 IOPS, 18.00 MiB/s [2024-12-06T12:21:16.316Z] 4640.00 IOPS, 18.12 MiB/s [2024-12-06T12:21:17.249Z] 4658.40 IOPS, 18.20 MiB/s [2024-12-06T12:21:18.183Z] 4661.00 IOPS, 18.21 MiB/s [2024-12-06T12:21:19.120Z] 4674.57 IOPS, 18.26 MiB/s [2024-12-06T12:21:20.056Z] 4684.25 IOPS, 18.30 MiB/s [2024-12-06T12:21:21.432Z] 4694.33 IOPS, 18.34 MiB/s [2024-12-06T12:21:21.432Z] 4702.60 IOPS, 18.37 MiB/s 00:13:34.774 Latency(us) 00:13:34.774 [2024-12-06T12:21:21.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.774 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:34.774 Verification LBA range: start 0x0 length 0x2000 00:13:34.774 TLSTESTn1 : 10.01 4708.79 18.39 0.00 0.00 27136.99 5034.36 21567.30 00:13:34.774 [2024-12-06T12:21:21.432Z] =================================================================================================================== 00:13:34.774 [2024-12-06T12:21:21.432Z] Total : 4708.79 18.39 0.00 0.00 27136.99 5034.36 21567.30 00:13:34.774 { 00:13:34.774 "results": [ 00:13:34.774 { 00:13:34.774 "job": "TLSTESTn1", 00:13:34.774 "core_mask": "0x4", 00:13:34.774 "workload": "verify", 00:13:34.774 "status": "finished", 00:13:34.774 "verify_range": { 00:13:34.774 "start": 0, 00:13:34.774 "length": 8192 00:13:34.774 }, 00:13:34.774 "queue_depth": 128, 00:13:34.774 "io_size": 4096, 00:13:34.774 "runtime": 10.013395, 00:13:34.774 "iops": 4708.792572349338, 00:13:34.774 "mibps": 18.393720985739602, 00:13:34.774 "io_failed": 0, 00:13:34.774 "io_timeout": 0, 00:13:34.774 "avg_latency_us": 27136.993119744882, 00:13:34.774 "min_latency_us": 5034.356363636363, 00:13:34.774 "max_latency_us": 21567.30181818182 00:13:34.774 } 00:13:34.774 ], 00:13:34.774 "core_count": 1 00:13:34.774 } 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:34.774 nvmf_trace.0 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72234 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72234 ']' 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
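bdevperf reports both the human-readable table and a JSON document with the same numbers; the run above sustained roughly 4.7k IOPS (18.4 MiB/s) at about 27 ms average latency over 10 s with queue depth 128 and 4 KiB verify I/O. If the JSON is captured to a file, the headline numbers can be pulled out with a small jq filter (jq usage here is illustrative, not part of the test scripts):

# Illustrative only: extract the headline numbers from a saved bdevperf result JSON.
jq -r '.results[0] | "\(.job): \(.iops | floor) IOPS, avg latency \(.avg_latency_us | floor) us"' bdevperf_result.json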
72234 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72234 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:34.774 killing process with pid 72234 00:13:34.774 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72234' 00:13:34.774 Received shutdown signal, test time was about 10.000000 seconds 00:13:34.774 00:13:34.774 Latency(us) 00:13:34.774 [2024-12-06T12:21:21.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.774 [2024-12-06T12:21:21.432Z] =================================================================================================================== 00:13:34.774 [2024-12-06T12:21:21.432Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72234 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72234 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:34.775 rmmod nvme_tcp 00:13:34.775 rmmod nvme_fabrics 00:13:34.775 rmmod nvme_keyring 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72201 ']' 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72201 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72201 ']' 00:13:34.775 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72201 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72201 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:35.033 killing process with pid 72201 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72201' 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72201 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72201 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:35.033 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:13:35.291 12:21:21 
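Teardown above mirrors setup: the SPDK-tagged iptables rules are removed by re-loading iptables-save output through grep -v SPDK_NVMF into iptables-restore, the bridge members are detached and downed, the veth pairs and the bridge are deleted (the in-namespace ends via ip netns exec), and the namespace itself is removed by _remove_spdk_ns, whose body is not expanded in this trace. A condensed sketch, with 'ip netns del' standing in for _remove_spdk_ns as an assumption:

# Teardown, condensed from the trace above; the last line is an assumed equivalent of _remove_spdk_ns.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip link set nvmf_init_br nomaster
ip link set nvmf_tgt_br nomaster
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns del nvmf_tgt_ns_spdk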
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.20V 00:13:35.291 00:13:35.291 real 0m13.949s 00:13:35.291 user 0m19.551s 00:13:35.291 sys 0m5.544s 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:35.291 ************************************ 00:13:35.291 END TEST nvmf_fips 00:13:35.291 ************************************ 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.291 ************************************ 00:13:35.291 START TEST nvmf_control_msg_list 00:13:35.291 ************************************ 00:13:35.291 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:13:35.573 * Looking for test storage... 00:13:35.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:35.573 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:35.573 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:13:35.573 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.573 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:35.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.574 --rc genhtml_branch_coverage=1 00:13:35.574 --rc genhtml_function_coverage=1 00:13:35.574 --rc genhtml_legend=1 00:13:35.574 --rc geninfo_all_blocks=1 00:13:35.574 --rc geninfo_unexecuted_blocks=1 00:13:35.574 00:13:35.574 ' 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:35.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.574 --rc genhtml_branch_coverage=1 00:13:35.574 --rc genhtml_function_coverage=1 00:13:35.574 --rc genhtml_legend=1 00:13:35.574 --rc geninfo_all_blocks=1 00:13:35.574 --rc geninfo_unexecuted_blocks=1 00:13:35.574 00:13:35.574 ' 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:35.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.574 --rc genhtml_branch_coverage=1 00:13:35.574 --rc genhtml_function_coverage=1 00:13:35.574 --rc genhtml_legend=1 00:13:35.574 --rc geninfo_all_blocks=1 00:13:35.574 --rc geninfo_unexecuted_blocks=1 00:13:35.574 00:13:35.574 ' 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:35.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.574 --rc genhtml_branch_coverage=1 00:13:35.574 --rc genhtml_function_coverage=1 00:13:35.574 --rc genhtml_legend=1 00:13:35.574 --rc geninfo_all_blocks=1 00:13:35.574 --rc 
geninfo_unexecuted_blocks=1 00:13:35.574 00:13:35.574 ' 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.574 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.574 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:35.575 Cannot find device "nvmf_init_br" 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:35.575 Cannot find device "nvmf_init_br2" 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:35.575 Cannot find device "nvmf_tgt_br" 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:35.575 Cannot find device "nvmf_tgt_br2" 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:35.575 Cannot find device "nvmf_init_br" 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:35.575 Cannot find device "nvmf_init_br2" 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:35.575 Cannot find device "nvmf_tgt_br" 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:35.575 Cannot find device "nvmf_tgt_br2" 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:13:35.575 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:35.834 Cannot find device "nvmf_br" 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:35.834 Cannot find 
device "nvmf_init_if" 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:35.834 Cannot find device "nvmf_init_if2" 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:35.834 12:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:35.834 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:35.834 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:13:35.834 00:13:35.834 --- 10.0.0.3 ping statistics --- 00:13:35.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.834 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:35.834 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:35.834 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:13:35.834 00:13:35.834 --- 10.0.0.4 ping statistics --- 00:13:35.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.834 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:35.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:35.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:13:35.834 00:13:35.834 --- 10.0.0.1 ping statistics --- 00:13:35.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.834 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:35.834 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:36.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:13:36.092 00:13:36.092 --- 10.0.0.2 ping statistics --- 00:13:36.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.093 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=72617 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 72617 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 72617 ']' 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.093 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:36.093 [2024-12-06 12:21:22.563037] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:36.093 [2024-12-06 12:21:22.563113] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.093 [2024-12-06 12:21:22.710905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.351 [2024-12-06 12:21:22.749507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.351 [2024-12-06 12:21:22.749562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.351 [2024-12-06 12:21:22.749576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.351 [2024-12-06 12:21:22.749586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.351 [2024-12-06 12:21:22.749595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.351 [2024-12-06 12:21:22.749950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.351 [2024-12-06 12:21:22.785641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:36.351 [2024-12-06 12:21:22.889918] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:36.351 Malloc0 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:36.351 [2024-12-06 12:21:22.925681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=72636 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=72637 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=72638 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:36.351 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 72636 00:13:36.609 [2024-12-06 12:21:23.103960] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:36.609 [2024-12-06 12:21:23.114322] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:36.609 [2024-12-06 12:21:23.114515] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:37.579 Initializing NVMe Controllers 00:13:37.579 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:37.579 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:13:37.579 Initialization complete. Launching workers. 00:13:37.579 ======================================================== 00:13:37.579 Latency(us) 00:13:37.579 Device Information : IOPS MiB/s Average min max 00:13:37.579 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3919.00 15.31 254.86 122.76 852.57 00:13:37.579 ======================================================== 00:13:37.579 Total : 3919.00 15.31 254.86 122.76 852.57 00:13:37.579 00:13:37.579 Initializing NVMe Controllers 00:13:37.579 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:37.579 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:13:37.579 Initialization complete. Launching workers. 00:13:37.579 ======================================================== 00:13:37.579 Latency(us) 00:13:37.579 Device Information : IOPS MiB/s Average min max 00:13:37.579 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3912.96 15.29 255.18 132.58 912.14 00:13:37.579 ======================================================== 00:13:37.579 Total : 3912.96 15.29 255.18 132.58 912.14 00:13:37.579 00:13:37.579 Initializing NVMe Controllers 00:13:37.579 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:37.579 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:13:37.579 Initialization complete. Launching workers. 
00:13:37.579 ======================================================== 00:13:37.579 Latency(us) 00:13:37.579 Device Information : IOPS MiB/s Average min max 00:13:37.579 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3910.00 15.27 255.41 160.84 870.13 00:13:37.579 ======================================================== 00:13:37.579 Total : 3910.00 15.27 255.41 160.84 870.13 00:13:37.579 00:13:37.579 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 72637 00:13:37.579 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 72638 00:13:37.579 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:37.579 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:13:37.579 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:37.579 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:13:37.579 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:37.579 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:13:37.579 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:37.579 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:37.579 rmmod nvme_tcp 00:13:37.579 rmmod nvme_fabrics 00:13:37.849 rmmod nvme_keyring 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 72617 ']' 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 72617 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 72617 ']' 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 72617 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72617 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.849 killing process with pid 72617 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72617' 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 72617 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 72617 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:37.849 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:13:38.108 00:13:38.108 real 0m2.766s 00:13:38.108 user 0m4.683s 00:13:38.108 
sys 0m1.287s 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:38.108 ************************************ 00:13:38.108 END TEST nvmf_control_msg_list 00:13:38.108 ************************************ 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:38.108 ************************************ 00:13:38.108 START TEST nvmf_wait_for_buf 00:13:38.108 ************************************ 00:13:38.108 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:13:38.369 * Looking for test storage... 00:13:38.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:38.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.369 --rc genhtml_branch_coverage=1 00:13:38.369 --rc genhtml_function_coverage=1 00:13:38.369 --rc genhtml_legend=1 00:13:38.369 --rc geninfo_all_blocks=1 00:13:38.369 --rc geninfo_unexecuted_blocks=1 00:13:38.369 00:13:38.369 ' 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:38.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.369 --rc genhtml_branch_coverage=1 00:13:38.369 --rc genhtml_function_coverage=1 00:13:38.369 --rc genhtml_legend=1 00:13:38.369 --rc geninfo_all_blocks=1 00:13:38.369 --rc geninfo_unexecuted_blocks=1 00:13:38.369 00:13:38.369 ' 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:38.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.369 --rc genhtml_branch_coverage=1 00:13:38.369 --rc genhtml_function_coverage=1 00:13:38.369 --rc genhtml_legend=1 00:13:38.369 --rc geninfo_all_blocks=1 00:13:38.369 --rc geninfo_unexecuted_blocks=1 00:13:38.369 00:13:38.369 ' 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:38.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.369 --rc genhtml_branch_coverage=1 00:13:38.369 --rc genhtml_function_coverage=1 00:13:38.369 --rc genhtml_legend=1 00:13:38.369 --rc geninfo_all_blocks=1 00:13:38.369 --rc geninfo_unexecuted_blocks=1 00:13:38.369 00:13:38.369 ' 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:38.369 12:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.369 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:38.370 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
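The NVME_HOSTNQN / NVME_HOSTID pair sourced from common.sh above (and later handed to nvme connect via NVME_HOST) can be reproduced with nvme-cli alone; the lines below are a minimal sketch of what the trace shows, not necessarily the script's exact code.

# nvme gen-hostnqn prints an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTNQN=$(nvme gen-hostnqn)
# the host ID used alongside it is the uuid suffix of that NQN
NVME_HOSTID=${NVME_HOSTNQN##*:}
# arguments appended to "nvme connect", mirroring NVME_HOST in the trace
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
printf '%s\n' "${NVME_HOST[@]}"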
00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:38.370 Cannot find device "nvmf_init_br" 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:38.370 Cannot find device "nvmf_init_br2" 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:38.370 Cannot find device "nvmf_tgt_br" 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:38.370 Cannot find device "nvmf_tgt_br2" 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:13:38.370 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:38.370 Cannot find device "nvmf_init_br" 00:13:38.370 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:13:38.370 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:38.370 Cannot find device "nvmf_init_br2" 00:13:38.370 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:13:38.370 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:38.636 Cannot find device "nvmf_tgt_br" 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:38.636 Cannot find device "nvmf_tgt_br2" 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:38.636 Cannot find device "nvmf_br" 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:38.636 Cannot find device "nvmf_init_if" 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:38.636 Cannot find device "nvmf_init_if2" 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:38.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:38.636 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:38.636 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:38.895 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:38.895 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:38.896 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:38.896 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:13:38.896 00:13:38.896 --- 10.0.0.3 ping statistics --- 00:13:38.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.896 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:38.896 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:38.896 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:13:38.896 00:13:38.896 --- 10.0.0.4 ping statistics --- 00:13:38.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.896 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:38.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:38.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:38.896 00:13:38.896 --- 10.0.0.1 ping statistics --- 00:13:38.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.896 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:38.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:38.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:13:38.896 00:13:38.896 --- 10.0.0.2 ping statistics --- 00:13:38.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.896 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=72874 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 72874 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 72874 ']' 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.896 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:38.896 [2024-12-06 12:21:25.425131] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
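The nvmf_veth_init trace above builds the virtual network the functional tests run over: two initiator-side veth pairs stay on the host with 10.0.0.1/24 and 10.0.0.2/24, two target-side pairs are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/24 and 10.0.0.4/24, the host-side peers are enslaved to a single nvmf_br bridge, port 4420 is opened, and connectivity is ping-checked in both directions. A condensed sketch of those steps, with interface names and addresses copied from the trace and the error handling plus the ipts rule-tagging omitted (so it is illustrative, not a drop-in replacement for test/nvmf/common.sh), might look like:

    #!/usr/bin/env bash
    # Condensed sketch of the veth/bridge topology built by nvmf_veth_init above.
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # Two initiator-side and two target-side veth pairs.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # The target ends move into the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: .1/.2 stay on the host (initiator side), .3/.4 live in the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring the links up on both sides of the namespace boundary.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Stitch the host-side peers together with one bridge.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Let NVMe/TCP traffic (port 4420) in and allow bridge-local forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity check: host reaches the namespaced target addresses and vice versa.
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2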
00:13:38.896 [2024-12-06 12:21:25.425281] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.154 [2024-12-06 12:21:25.568278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.154 [2024-12-06 12:21:25.593876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.154 [2024-12-06 12:21:25.593928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.154 [2024-12-06 12:21:25.593954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.154 [2024-12-06 12:21:25.593961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.154 [2024-12-06 12:21:25.593966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.154 [2024-12-06 12:21:25.594255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.720 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.720 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:13:39.720 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.721 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.979 12:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:39.979 [2024-12-06 12:21:26.399268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:39.979 Malloc0 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:39.979 [2024-12-06 12:21:26.443335] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:39.979 [2024-12-06 12:21:26.471409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.979 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:40.237 [2024-12-06 12:21:26.668298] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:41.616 Initializing NVMe Controllers 00:13:41.616 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:41.616 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:13:41.616 Initialization complete. Launching workers. 00:13:41.616 ======================================================== 00:13:41.616 Latency(us) 00:13:41.616 Device Information : IOPS MiB/s Average min max 00:13:41.616 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.50 62.44 8008.36 6010.41 10993.01 00:13:41.616 ======================================================== 00:13:41.616 Total : 499.50 62.44 8008.36 6010.41 10993.01 00:13:41.616 00:13:41.616 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:13:41.616 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.616 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:13:41.616 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:41.616 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.616 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:13:41.616 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:13:41.616 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:41.616 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:13:41.616 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:41.616 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:41.616 rmmod nvme_tcp 00:13:41.616 rmmod nvme_fabrics 00:13:41.616 rmmod nvme_keyring 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 72874 ']' 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 72874 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 72874 ']' 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 72874 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72874 00:13:41.616 killing process with pid 72874 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72874' 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 72874 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 72874 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:41.616 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:41.875 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:13:41.876 00:13:41.876 real 0m3.781s 00:13:41.876 user 0m3.265s 00:13:41.876 sys 0m0.764s 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.876 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:41.876 ************************************ 00:13:41.876 END TEST nvmf_wait_for_buf 00:13:41.876 ************************************ 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.136 ************************************ 00:13:42.136 START TEST nvmf_nsid 00:13:42.136 ************************************ 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:42.136 * Looking for test storage... 
00:13:42.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.136 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:42.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.137 --rc genhtml_branch_coverage=1 00:13:42.137 --rc genhtml_function_coverage=1 00:13:42.137 --rc genhtml_legend=1 00:13:42.137 --rc geninfo_all_blocks=1 00:13:42.137 --rc geninfo_unexecuted_blocks=1 00:13:42.137 00:13:42.137 ' 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:42.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.137 --rc genhtml_branch_coverage=1 00:13:42.137 --rc genhtml_function_coverage=1 00:13:42.137 --rc genhtml_legend=1 00:13:42.137 --rc geninfo_all_blocks=1 00:13:42.137 --rc geninfo_unexecuted_blocks=1 00:13:42.137 00:13:42.137 ' 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:42.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.137 --rc genhtml_branch_coverage=1 00:13:42.137 --rc genhtml_function_coverage=1 00:13:42.137 --rc genhtml_legend=1 00:13:42.137 --rc geninfo_all_blocks=1 00:13:42.137 --rc geninfo_unexecuted_blocks=1 00:13:42.137 00:13:42.137 ' 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:42.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.137 --rc genhtml_branch_coverage=1 00:13:42.137 --rc genhtml_function_coverage=1 00:13:42.137 --rc genhtml_legend=1 00:13:42.137 --rc geninfo_all_blocks=1 00:13:42.137 --rc geninfo_unexecuted_blocks=1 00:13:42.137 00:13:42.137 ' 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
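The scripts/common.sh trace above is the lcov version probe that decides whether the coverage run still needs the --rc lcov_branch_coverage/--rc lcov_function_coverage options (lcov older than 2). A simplified stand-in for that comparison, not a copy of the real cmp_versions helper, might look like:

    # Simplified sketch of the version test traced above: is the installed lcov
    # older than 2.x? Components are split on the same ".-:" separators the trace
    # shows; missing components compare as 0. This is illustrative only.
    version_lt() {            # version_lt 1.15 2  ->  true
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1              # equal is not "less than"
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')
    if version_lt "$lcov_ver" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi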
00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.137 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.137 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:42.398 Cannot find device "nvmf_init_br" 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:42.398 Cannot find device "nvmf_init_br2" 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:42.398 Cannot find device "nvmf_tgt_br" 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:42.398 Cannot find device "nvmf_tgt_br2" 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:42.398 Cannot find device "nvmf_init_br" 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:42.398 Cannot find device "nvmf_init_br2" 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:42.398 Cannot find device "nvmf_tgt_br" 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:42.398 Cannot find device "nvmf_tgt_br2" 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:42.398 Cannot find device "nvmf_br" 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:42.398 Cannot find device "nvmf_init_if" 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:42.398 Cannot find device "nvmf_init_if2" 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:42.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:13:42.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:42.398 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:42.398 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:42.398 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:42.398 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:42.398 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:42.398 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
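The firewall rules installed just below go through the ipts wrapper, and the earlier wait_for_buf teardown ran iptr; reconstructed from the expanded iptables commands visible in the trace, the pattern is to tag every test-added rule with an SPDK_NVMF comment so cleanup can drop exactly those rules from an iptables-save dump. A sketch of that pair of helpers:

    # Tag each rule the test adds with its own arguments, then strip all tagged
    # rules at teardown by filtering an iptables-save dump (pattern taken from
    # the expanded commands in this trace).
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # setup
    iptr                                                            # teardown: removes every tagged rule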
00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:42.659 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:42.659 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:13:42.659 00:13:42.659 --- 10.0.0.3 ping statistics --- 00:13:42.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.659 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:42.659 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:42.659 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:13:42.659 00:13:42.659 --- 10.0.0.4 ping statistics --- 00:13:42.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.659 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:42.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:42.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:42.659 00:13:42.659 --- 10.0.0.1 ping statistics --- 00:13:42.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.659 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:42.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:42.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:13:42.659 00:13:42.659 --- 10.0.0.2 ping statistics --- 00:13:42.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.659 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73144 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73144 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73144 ']' 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.659 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:42.659 [2024-12-06 12:21:29.287252] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
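The nvmfappstart step above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace on core mask 0x1 and then blocks in waitforlisten until the default RPC socket answers. A rough sketch of that sequence, with an explicit rpc_get_methods poll standing in for waitforlisten:

    # Launch the target inside the namespace (flags as shown in the trace) and
    # wait for /var/tmp/spdk.sock to accept RPCs. The polling loop is a stand-in
    # for waitforlisten from common/autotest_common.sh.
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)

    "${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" -m 1 &
    nvmfpid=$!

    for (( i = 0; i < 30; i++ )); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 1
    done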
00:13:42.659 [2024-12-06 12:21:29.287334] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.919 [2024-12-06 12:21:29.432610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.919 [2024-12-06 12:21:29.459774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.919 [2024-12-06 12:21:29.459831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.919 [2024-12-06 12:21:29.459840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.919 [2024-12-06 12:21:29.459846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.919 [2024-12-06 12:21:29.459852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.919 [2024-12-06 12:21:29.460122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.919 [2024-12-06 12:21:29.487279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:42.919 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.919 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:42.919 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:42.919 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:42.919 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73167 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=e01d950e-0a2e-41e8-a9ee-4e5d716306a9 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a41933f4-2ef6-4a24-a5b9-67c4b3a12fa7 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=85850e8c-fb1d-47f0-a35f-ed363056a31c 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:43.178 null0 00:13:43.178 null1 00:13:43.178 null2 00:13:43.178 [2024-12-06 12:21:29.637624] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.178 [2024-12-06 12:21:29.659706] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:43.178 [2024-12-06 12:21:29.659797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73167 ] 00:13:43.178 [2024-12-06 12:21:29.661729] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73167 /var/tmp/tgt2.sock 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73167 ']' 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
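The nsid test drives two targets at once: the namespaced nvmf_tgt started above answers on tgt1addr (10.0.0.3), while a second spdk_tgt is launched here on its own core mask and RPC socket, and three namespace UUIDs are generated up front. A condensed sketch of that setup (the polling loop stands in for waitforlisten, and the exact RPCs nsid.sh later sends to /var/tmp/tgt2.sock are not expanded in the trace):

    # Second target on its own core and RPC socket, plus the per-namespace UUIDs
    # the test will later verify as NGUIDs on the initiator side.
    tgt1addr=10.0.0.3
    tgt2addr=10.0.0.1
    tgt2sock=/var/tmp/tgt2.sock

    ns1uuid=$(uuidgen)
    ns2uuid=$(uuidgen)
    ns3uuid=$(uuidgen)

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r "$tgt2sock" &
    tgt2pid=$!

    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$tgt2sock" rpc_get_methods &> /dev/null; do
        sleep 1
    done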
00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.178 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:43.178 [2024-12-06 12:21:29.814032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.438 [2024-12-06 12:21:29.853683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.438 [2024-12-06 12:21:29.900592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:43.438 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.438 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:43.438 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:13:44.007 [2024-12-06 12:21:30.443699] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.007 [2024-12-06 12:21:30.459759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:13:44.007 nvme0n1 nvme0n2 00:13:44.007 nvme1n1 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid=539e2455-b2a8-46ce-bfce-40a317783b05 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:13:44.007 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:45.388 12:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid e01d950e-0a2e-41e8-a9ee-4e5d716306a9 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e01d950e0a2e41e8a9ee4e5d716306a9 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E01D950E0A2E41E8A9EE4E5D716306A9 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ E01D950E0A2E41E8A9EE4E5D716306A9 == \E\0\1\D\9\5\0\E\0\A\2\E\4\1\E\8\A\9\E\E\4\E\5\D\7\1\6\3\0\6\A\9 ]] 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a41933f4-2ef6-4a24-a5b9-67c4b3a12fa7 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a41933f42ef64a24a5b967c4b3a12fa7 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A41933F42EF64A24A5B967C4B3A12FA7 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A41933F42EF64A24A5B967C4B3A12FA7 == \A\4\1\9\3\3\F\4\2\E\F\6\4\A\2\4\A\5\B\9\6\7\C\4\B\3\A\1\2\F\A\7 ]] 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:45.388 12:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 85850e8c-fb1d-47f0-a35f-ed363056a31c 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=85850e8cfb1d47f0a35fed363056a31c 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 85850E8CFB1D47F0A35FED363056A31C 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 85850E8CFB1D47F0A35FED363056A31C == \8\5\8\5\0\E\8\C\F\B\1\D\4\7\F\0\A\3\5\F\E\D\3\6\3\0\5\6\A\3\1\C ]] 00:13:45.388 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73167 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73167 ']' 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73167 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73167 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:45.648 killing process with pid 73167 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73167' 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73167 00:13:45.648 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73167 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:45.908 rmmod nvme_tcp 00:13:45.908 rmmod nvme_fabrics 00:13:45.908 rmmod nvme_keyring 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73144 ']' 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73144 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73144 ']' 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73144 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73144 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73144' 00:13:45.908 killing process with pid 73144 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73144 00:13:45.908 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73144 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:46.167 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:46.168 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:13:46.168 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:46.168 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:46.168 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:46.168 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:46.168 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:46.168 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:46.168 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:46.168 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.168 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.427 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:46.427 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.427 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.427 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.427 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:13:46.427 00:13:46.427 real 0m4.292s 00:13:46.427 user 0m6.362s 00:13:46.427 sys 0m1.509s 00:13:46.427 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.427 ************************************ 00:13:46.427 END TEST nvmf_nsid 00:13:46.427 ************************************ 00:13:46.427 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:46.427 12:21:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:46.427 00:13:46.427 real 4m48.293s 00:13:46.427 user 10m3.584s 00:13:46.427 sys 1m3.372s 00:13:46.427 12:21:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.427 12:21:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:46.427 ************************************ 00:13:46.427 END TEST nvmf_target_extra 00:13:46.427 ************************************ 00:13:46.427 12:21:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:46.427 12:21:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:46.427 12:21:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.427 12:21:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:46.427 ************************************ 00:13:46.427 START TEST nvmf_host 00:13:46.427 ************************************ 00:13:46.427 12:21:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:46.427 * Looking for test storage... 
00:13:46.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:46.427 12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:46.427 12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:13:46.427 12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:46.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.689 --rc genhtml_branch_coverage=1 00:13:46.689 --rc genhtml_function_coverage=1 00:13:46.689 --rc genhtml_legend=1 00:13:46.689 --rc geninfo_all_blocks=1 00:13:46.689 --rc geninfo_unexecuted_blocks=1 00:13:46.689 00:13:46.689 ' 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:46.689 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:46.689 --rc genhtml_branch_coverage=1 00:13:46.689 --rc genhtml_function_coverage=1 00:13:46.689 --rc genhtml_legend=1 00:13:46.689 --rc geninfo_all_blocks=1 00:13:46.689 --rc geninfo_unexecuted_blocks=1 00:13:46.689 00:13:46.689 ' 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:46.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.689 --rc genhtml_branch_coverage=1 00:13:46.689 --rc genhtml_function_coverage=1 00:13:46.689 --rc genhtml_legend=1 00:13:46.689 --rc geninfo_all_blocks=1 00:13:46.689 --rc geninfo_unexecuted_blocks=1 00:13:46.689 00:13:46.689 ' 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:46.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.689 --rc genhtml_branch_coverage=1 00:13:46.689 --rc genhtml_function_coverage=1 00:13:46.689 --rc genhtml_legend=1 00:13:46.689 --rc geninfo_all_blocks=1 00:13:46.689 --rc geninfo_unexecuted_blocks=1 00:13:46.689 00:13:46.689 ' 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.689 12:21:33 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:46.690 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:46.690 
12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:46.690 ************************************ 00:13:46.690 START TEST nvmf_identify 00:13:46.690 ************************************ 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:46.690 * Looking for test storage... 00:13:46.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:46.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.690 --rc genhtml_branch_coverage=1 00:13:46.690 --rc genhtml_function_coverage=1 00:13:46.690 --rc genhtml_legend=1 00:13:46.690 --rc geninfo_all_blocks=1 00:13:46.690 --rc geninfo_unexecuted_blocks=1 00:13:46.690 00:13:46.690 ' 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:46.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.690 --rc genhtml_branch_coverage=1 00:13:46.690 --rc genhtml_function_coverage=1 00:13:46.690 --rc genhtml_legend=1 00:13:46.690 --rc geninfo_all_blocks=1 00:13:46.690 --rc geninfo_unexecuted_blocks=1 00:13:46.690 00:13:46.690 ' 00:13:46.690 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:46.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.690 --rc genhtml_branch_coverage=1 00:13:46.690 --rc genhtml_function_coverage=1 00:13:46.690 --rc genhtml_legend=1 00:13:46.690 --rc geninfo_all_blocks=1 00:13:46.690 --rc geninfo_unexecuted_blocks=1 00:13:46.690 00:13:46.691 ' 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:46.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.691 --rc genhtml_branch_coverage=1 00:13:46.691 --rc genhtml_function_coverage=1 00:13:46.691 --rc genhtml_legend=1 00:13:46.691 --rc geninfo_all_blocks=1 00:13:46.691 --rc geninfo_unexecuted_blocks=1 00:13:46.691 00:13:46.691 ' 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:46.691 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.951 
12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:46.951 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:13:46.951 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.952 12:21:33 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:46.952 Cannot find device "nvmf_init_br" 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:46.952 Cannot find device "nvmf_init_br2" 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:46.952 Cannot find device "nvmf_tgt_br" 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:13:46.952 Cannot find device "nvmf_tgt_br2" 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:46.952 Cannot find device "nvmf_init_br" 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:46.952 Cannot find device "nvmf_init_br2" 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:46.952 Cannot find device "nvmf_tgt_br" 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:46.952 Cannot find device "nvmf_tgt_br2" 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:46.952 Cannot find device "nvmf_br" 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:46.952 Cannot find device "nvmf_init_if" 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:46.952 Cannot find device "nvmf_init_if2" 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:46.952 
12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:46.952 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:47.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:47.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:13:47.212 00:13:47.212 --- 10.0.0.3 ping statistics --- 00:13:47.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.212 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:47.212 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:47.212 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:13:47.212 00:13:47.212 --- 10.0.0.4 ping statistics --- 00:13:47.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.212 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:47.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:47.212 00:13:47.212 --- 10.0.0.1 ping statistics --- 00:13:47.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.212 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:47.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:13:47.212 00:13:47.212 --- 10.0.0.2 ping statistics --- 00:13:47.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.212 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73521 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73521 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 73521 ']' 00:13:47.212 
12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.212 12:21:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:47.212 [2024-12-06 12:21:33.839991] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:47.213 [2024-12-06 12:21:33.840077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.472 [2024-12-06 12:21:33.986051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.472 [2024-12-06 12:21:34.015306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.472 [2024-12-06 12:21:34.015368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.472 [2024-12-06 12:21:34.015378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.472 [2024-12-06 12:21:34.015385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.472 [2024-12-06 12:21:34.015392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
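The app_setup_trace notices above spell out how the tracepoints enabled with -e 0xFFFF can be inspected for this target instance; condensed into commands, with the app name and instance id taken from the notice itself and the copy destination being an arbitrary example path:
# Capture a live snapshot of the enabled tracepoint groups:
spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file for offline analysis:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0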
00:13:47.472 [2024-12-06 12:21:34.016233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.472 [2024-12-06 12:21:34.016272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.472 [2024-12-06 12:21:34.016363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.472 [2024-12-06 12:21:34.016366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.472 [2024-12-06 12:21:34.044657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:47.472 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.472 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:13:47.472 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.472 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.472 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:47.472 [2024-12-06 12:21:34.107993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.472 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.472 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:47.472 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:47.472 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:47.732 Malloc0 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:47.732 [2024-12-06 12:21:34.209803] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:47.732 [ 00:13:47.732 { 00:13:47.732 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:47.732 "subtype": "Discovery", 00:13:47.732 "listen_addresses": [ 00:13:47.732 { 00:13:47.732 "trtype": "TCP", 00:13:47.732 "adrfam": "IPv4", 00:13:47.732 "traddr": "10.0.0.3", 00:13:47.732 "trsvcid": "4420" 00:13:47.732 } 00:13:47.732 ], 00:13:47.732 "allow_any_host": true, 00:13:47.732 "hosts": [] 00:13:47.732 }, 00:13:47.732 { 00:13:47.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.732 "subtype": "NVMe", 00:13:47.732 "listen_addresses": [ 00:13:47.732 { 00:13:47.732 "trtype": "TCP", 00:13:47.732 "adrfam": "IPv4", 00:13:47.732 "traddr": "10.0.0.3", 00:13:47.732 "trsvcid": "4420" 00:13:47.732 } 00:13:47.732 ], 00:13:47.732 "allow_any_host": true, 00:13:47.732 "hosts": [], 00:13:47.732 "serial_number": "SPDK00000000000001", 00:13:47.732 "model_number": "SPDK bdev Controller", 00:13:47.732 "max_namespaces": 32, 00:13:47.732 "min_cntlid": 1, 00:13:47.732 "max_cntlid": 65519, 00:13:47.732 "namespaces": [ 00:13:47.732 { 00:13:47.732 "nsid": 1, 00:13:47.732 "bdev_name": "Malloc0", 00:13:47.732 "name": "Malloc0", 00:13:47.732 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:47.732 "eui64": "ABCDEF0123456789", 00:13:47.732 "uuid": "079133e1-7fe3-4c61-9484-4630ca5d2d86" 00:13:47.732 } 00:13:47.732 ] 00:13:47.732 } 00:13:47.732 ] 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.732 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:47.732 [2024-12-06 12:21:34.263883] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
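[editorial note] Collected in one place, the rpc_cmd configuration traced above (rpc_cmd is a thin wrapper around scripts/rpc.py in these tests) amounts to the following sketch; the nvmf_get_subsystems call at the end returns the JSON document shown above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_get_subsystems                            # prints the subsystem JSON shown above

The spdk_nvme_identify run launched immediately above then connects to the discovery subsystem at 10.0.0.3:4420 as an NVMe/TCP host and produces the controller and discovery-log report that follows.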
00:13:47.732 [2024-12-06 12:21:34.263951] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73543 ] 00:13:47.995 [2024-12-06 12:21:34.421864] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:13:47.995 [2024-12-06 12:21:34.421936] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:47.995 [2024-12-06 12:21:34.421944] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:47.995 [2024-12-06 12:21:34.421957] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:47.995 [2024-12-06 12:21:34.421968] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:47.995 [2024-12-06 12:21:34.422342] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:13:47.995 [2024-12-06 12:21:34.422409] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15ca750 0 00:13:47.995 [2024-12-06 12:21:34.436237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:47.995 [2024-12-06 12:21:34.436260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:47.995 [2024-12-06 12:21:34.436282] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:47.995 [2024-12-06 12:21:34.436286] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:47.995 [2024-12-06 12:21:34.436315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.995 [2024-12-06 12:21:34.436322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.995 [2024-12-06 12:21:34.436326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ca750) 00:13:47.995 [2024-12-06 12:21:34.436338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:47.995 [2024-12-06 12:21:34.436367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e740, cid 0, qid 0 00:13:47.995 [2024-12-06 12:21:34.444323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.995 [2024-12-06 12:21:34.444360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.995 [2024-12-06 12:21:34.444382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.995 [2024-12-06 12:21:34.444387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e740) on tqpair=0x15ca750 00:13:47.995 [2024-12-06 12:21:34.444399] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:47.995 [2024-12-06 12:21:34.444407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:13:47.995 [2024-12-06 12:21:34.444414] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:13:47.995 [2024-12-06 12:21:34.444432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.995 [2024-12-06 12:21:34.444437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:13:47.995 [2024-12-06 12:21:34.444442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ca750) 00:13:47.995 [2024-12-06 12:21:34.444451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.995 [2024-12-06 12:21:34.444480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e740, cid 0, qid 0 00:13:47.995 [2024-12-06 12:21:34.444554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.995 [2024-12-06 12:21:34.444561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.995 [2024-12-06 12:21:34.444565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.995 [2024-12-06 12:21:34.444569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e740) on tqpair=0x15ca750 00:13:47.995 [2024-12-06 12:21:34.444575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:13:47.995 [2024-12-06 12:21:34.444597] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:13:47.995 [2024-12-06 12:21:34.444605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.995 [2024-12-06 12:21:34.444626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.995 [2024-12-06 12:21:34.444630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ca750) 00:13:47.995 [2024-12-06 12:21:34.444638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.995 [2024-12-06 12:21:34.444657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e740, cid 0, qid 0 00:13:47.995 [2024-12-06 12:21:34.444702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.995 [2024-12-06 12:21:34.444709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.995 [2024-12-06 12:21:34.444713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.444717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e740) on tqpair=0x15ca750 00:13:47.996 [2024-12-06 12:21:34.444723] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:13:47.996 [2024-12-06 12:21:34.444731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:13:47.996 [2024-12-06 12:21:34.444739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.444743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.444747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ca750) 00:13:47.996 [2024-12-06 12:21:34.444754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.996 [2024-12-06 12:21:34.444777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e740, cid 0, qid 0 00:13:47.996 [2024-12-06 12:21:34.444820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.996 [2024-12-06 12:21:34.444827] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.996 [2024-12-06 12:21:34.444831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.444835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e740) on tqpair=0x15ca750 00:13:47.996 [2024-12-06 12:21:34.444841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:47.996 [2024-12-06 12:21:34.444851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.444856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.444860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ca750) 00:13:47.996 [2024-12-06 12:21:34.444867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.996 [2024-12-06 12:21:34.444884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e740, cid 0, qid 0 00:13:47.996 [2024-12-06 12:21:34.444929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.996 [2024-12-06 12:21:34.444936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.996 [2024-12-06 12:21:34.444939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.444943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e740) on tqpair=0x15ca750 00:13:47.996 [2024-12-06 12:21:34.444949] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:13:47.996 [2024-12-06 12:21:34.444954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:13:47.996 [2024-12-06 12:21:34.444962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:47.996 [2024-12-06 12:21:34.445073] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:13:47.996 [2024-12-06 12:21:34.445079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:47.996 [2024-12-06 12:21:34.445089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.445093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.445097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ca750) 00:13:47.996 [2024-12-06 12:21:34.445104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.996 [2024-12-06 12:21:34.445123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e740, cid 0, qid 0 00:13:47.996 [2024-12-06 12:21:34.445166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.996 [2024-12-06 12:21:34.445173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.996 [2024-12-06 12:21:34.445177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:13:47.996 [2024-12-06 12:21:34.445197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e740) on tqpair=0x15ca750 00:13:47.996 [2024-12-06 12:21:34.445203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:47.996 [2024-12-06 12:21:34.445213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.445218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.445222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ca750) 00:13:47.996 [2024-12-06 12:21:34.445230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.996 [2024-12-06 12:21:34.445262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e740, cid 0, qid 0 00:13:47.996 [2024-12-06 12:21:34.445320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.996 [2024-12-06 12:21:34.445329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.996 [2024-12-06 12:21:34.445333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.445337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e740) on tqpair=0x15ca750 00:13:47.996 [2024-12-06 12:21:34.445342] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:47.996 [2024-12-06 12:21:34.445348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:13:47.996 [2024-12-06 12:21:34.445356] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:13:47.996 [2024-12-06 12:21:34.445366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:13:47.996 [2024-12-06 12:21:34.445377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.445382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ca750) 00:13:47.996 [2024-12-06 12:21:34.445390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.996 [2024-12-06 12:21:34.445410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e740, cid 0, qid 0 00:13:47.996 [2024-12-06 12:21:34.445498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:47.996 [2024-12-06 12:21:34.445505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:47.996 [2024-12-06 12:21:34.445509] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.445513] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ca750): datao=0, datal=4096, cccid=0 00:13:47.996 [2024-12-06 12:21:34.445519] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162e740) on tqpair(0x15ca750): expected_datao=0, payload_size=4096 00:13:47.996 [2024-12-06 12:21:34.445524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.445533] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.445538] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.445546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.996 [2024-12-06 12:21:34.445553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.996 [2024-12-06 12:21:34.445571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.996 [2024-12-06 12:21:34.445575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e740) on tqpair=0x15ca750 00:13:47.996 [2024-12-06 12:21:34.445584] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:13:47.996 [2024-12-06 12:21:34.445590] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:13:47.996 [2024-12-06 12:21:34.445594] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:13:47.996 [2024-12-06 12:21:34.445601] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:13:47.996 [2024-12-06 12:21:34.445606] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:13:47.996 [2024-12-06 12:21:34.445611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:13:47.997 [2024-12-06 12:21:34.445620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:13:47.997 [2024-12-06 12:21:34.445628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ca750) 00:13:47.997 [2024-12-06 12:21:34.445643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:47.997 [2024-12-06 12:21:34.445663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e740, cid 0, qid 0 00:13:47.997 [2024-12-06 12:21:34.445717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.997 [2024-12-06 12:21:34.445724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.997 [2024-12-06 12:21:34.445727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e740) on tqpair=0x15ca750 00:13:47.997 [2024-12-06 12:21:34.445746] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15ca750) 00:13:47.997 [2024-12-06 12:21:34.445762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.997 
[2024-12-06 12:21:34.445768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15ca750) 00:13:47.997 [2024-12-06 12:21:34.445782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.997 [2024-12-06 12:21:34.445788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15ca750) 00:13:47.997 [2024-12-06 12:21:34.445801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.997 [2024-12-06 12:21:34.445807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:47.997 [2024-12-06 12:21:34.445821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.997 [2024-12-06 12:21:34.445827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:47.997 [2024-12-06 12:21:34.445836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:47.997 [2024-12-06 12:21:34.445843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ca750) 00:13:47.997 [2024-12-06 12:21:34.445854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.997 [2024-12-06 12:21:34.445874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e740, cid 0, qid 0 00:13:47.997 [2024-12-06 12:21:34.445882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e8c0, cid 1, qid 0 00:13:47.997 [2024-12-06 12:21:34.445887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ea40, cid 2, qid 0 00:13:47.997 [2024-12-06 12:21:34.445892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:47.997 [2024-12-06 12:21:34.445897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ed40, cid 4, qid 0 00:13:47.997 [2024-12-06 12:21:34.445978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.997 [2024-12-06 12:21:34.445985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.997 [2024-12-06 12:21:34.445989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.445993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ed40) on tqpair=0x15ca750 00:13:47.997 [2024-12-06 
12:21:34.445999] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:13:47.997 [2024-12-06 12:21:34.446008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:13:47.997 [2024-12-06 12:21:34.446021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ca750) 00:13:47.997 [2024-12-06 12:21:34.446032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.997 [2024-12-06 12:21:34.446051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ed40, cid 4, qid 0 00:13:47.997 [2024-12-06 12:21:34.446107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:47.997 [2024-12-06 12:21:34.446114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:47.997 [2024-12-06 12:21:34.446118] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446121] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ca750): datao=0, datal=4096, cccid=4 00:13:47.997 [2024-12-06 12:21:34.446126] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162ed40) on tqpair(0x15ca750): expected_datao=0, payload_size=4096 00:13:47.997 [2024-12-06 12:21:34.446131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446138] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446142] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.997 [2024-12-06 12:21:34.446157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.997 [2024-12-06 12:21:34.446160] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ed40) on tqpair=0x15ca750 00:13:47.997 [2024-12-06 12:21:34.446177] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:13:47.997 [2024-12-06 12:21:34.446232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ca750) 00:13:47.997 [2024-12-06 12:21:34.446247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.997 [2024-12-06 12:21:34.446255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15ca750) 00:13:47.997 [2024-12-06 12:21:34.446270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.997 [2024-12-06 12:21:34.446296] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ed40, cid 4, qid 0 00:13:47.997 [2024-12-06 12:21:34.446304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162eec0, cid 5, qid 0 00:13:47.997 [2024-12-06 12:21:34.446409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:47.997 [2024-12-06 12:21:34.446426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:47.997 [2024-12-06 12:21:34.446431] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446435] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ca750): datao=0, datal=1024, cccid=4 00:13:47.997 [2024-12-06 12:21:34.446441] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162ed40) on tqpair(0x15ca750): expected_datao=0, payload_size=1024 00:13:47.997 [2024-12-06 12:21:34.446445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446453] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:47.997 [2024-12-06 12:21:34.446457] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:47.998 [2024-12-06 12:21:34.446463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.998 [2024-12-06 12:21:34.446469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.998 [2024-12-06 12:21:34.446473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.998 [2024-12-06 12:21:34.446477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162eec0) on tqpair=0x15ca750 00:13:47.998 [2024-12-06 12:21:34.446501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.998 [2024-12-06 12:21:34.446509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.998 [2024-12-06 12:21:34.446513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.998 [2024-12-06 12:21:34.446517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ed40) on tqpair=0x15ca750 00:13:47.998 [2024-12-06 12:21:34.446530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.998 [2024-12-06 12:21:34.446535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ca750) 00:13:47.998 [2024-12-06 12:21:34.446543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.998 [2024-12-06 12:21:34.446567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ed40, cid 4, qid 0 00:13:47.998 [2024-12-06 12:21:34.446647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:47.998 [2024-12-06 12:21:34.446654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:47.998 [2024-12-06 12:21:34.446658] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:47.998 [2024-12-06 12:21:34.446662] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ca750): datao=0, datal=3072, cccid=4 00:13:47.998 [2024-12-06 12:21:34.446667] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162ed40) on tqpair(0x15ca750): expected_datao=0, payload_size=3072 00:13:47.998 [2024-12-06 12:21:34.446671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.998 [2024-12-06 12:21:34.446678] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:13:47.998 [2024-12-06 12:21:34.446682] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:47.998 [2024-12-06 12:21:34.446691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.998 [2024-12-06 12:21:34.446697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.998 [2024-12-06 12:21:34.446700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.998 [2024-12-06 12:21:34.446704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ed40) on tqpair=0x15ca750 00:13:47.998 [2024-12-06 12:21:34.446714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.998 [2024-12-06 12:21:34.446719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15ca750) 00:13:47.998 [2024-12-06 12:21:34.446726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.998 [2024-12-06 12:21:34.446748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ed40, cid 4, qid 0 00:13:47.998 [2024-12-06 12:21:34.446810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:47.998 [2024-12-06 12:21:34.446817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:47.998 [2024-12-06 12:21:34.446821] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:47.998 [2024-12-06 12:21:34.446824] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15ca750): datao=0, datal=8, cccid=4 00:13:47.998 [2024-12-06 12:21:34.446829] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162ed40) on tqpair(0x15ca750): expected_datao=0, payload_size=8 00:13:47.998 ===================================================== 00:13:47.998 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:47.998 ===================================================== 00:13:47.998 Controller Capabilities/Features 00:13:47.998 ================================ 00:13:47.998 Vendor ID: 0000 00:13:47.998 Subsystem Vendor ID: 0000 00:13:47.998 Serial Number: .................... 00:13:47.998 Model Number: ........................................ 
00:13:47.998 Firmware Version: 25.01 00:13:47.998 Recommended Arb Burst: 0 00:13:47.998 IEEE OUI Identifier: 00 00 00 00:13:47.998 Multi-path I/O 00:13:47.998 May have multiple subsystem ports: No 00:13:47.998 May have multiple controllers: No 00:13:47.998 Associated with SR-IOV VF: No 00:13:47.998 Max Data Transfer Size: 131072 00:13:47.998 Max Number of Namespaces: 0 00:13:47.998 Max Number of I/O Queues: 1024 00:13:47.998 NVMe Specification Version (VS): 1.3 00:13:47.998 NVMe Specification Version (Identify): 1.3 00:13:47.998 Maximum Queue Entries: 128 00:13:47.998 Contiguous Queues Required: Yes 00:13:47.998 Arbitration Mechanisms Supported 00:13:47.998 Weighted Round Robin: Not Supported 00:13:47.998 Vendor Specific: Not Supported 00:13:47.998 Reset Timeout: 15000 ms 00:13:47.998 Doorbell Stride: 4 bytes 00:13:47.998 NVM Subsystem Reset: Not Supported 00:13:47.998 Command Sets Supported 00:13:47.998 NVM Command Set: Supported 00:13:47.998 Boot Partition: Not Supported 00:13:47.998 Memory Page Size Minimum: 4096 bytes 00:13:47.998 Memory Page Size Maximum: 4096 bytes 00:13:47.998 Persistent Memory Region: Not Supported 00:13:47.998 Optional Asynchronous Events Supported 00:13:47.998 Namespace Attribute Notices: Not Supported 00:13:47.998 Firmware Activation Notices: Not Supported 00:13:47.998 ANA Change Notices: Not Supported 00:13:47.998 PLE Aggregate Log Change Notices: Not Supported 00:13:47.998 LBA Status Info Alert Notices: Not Supported 00:13:47.998 EGE Aggregate Log Change Notices: Not Supported 00:13:47.998 Normal NVM Subsystem Shutdown event: Not Supported 00:13:47.998 Zone Descriptor Change Notices: Not Supported 00:13:47.998 Discovery Log Change Notices: Supported 00:13:47.998 Controller Attributes 00:13:47.998 128-bit Host Identifier: Not Supported 00:13:47.998 Non-Operational Permissive Mode: Not Supported 00:13:47.998 NVM Sets: Not Supported 00:13:47.998 Read Recovery Levels: Not Supported 00:13:47.998 Endurance Groups: Not Supported 00:13:47.998 Predictable Latency Mode: Not Supported 00:13:47.998 Traffic Based Keep ALive: Not Supported 00:13:47.998 Namespace Granularity: Not Supported 00:13:47.998 SQ Associations: Not Supported 00:13:47.998 UUID List: Not Supported 00:13:47.998 Multi-Domain Subsystem: Not Supported 00:13:47.998 Fixed Capacity Management: Not Supported 00:13:47.998 Variable Capacity Management: Not Supported 00:13:47.998 Delete Endurance Group: Not Supported 00:13:47.998 Delete NVM Set: Not Supported 00:13:47.998 Extended LBA Formats Supported: Not Supported 00:13:47.998 Flexible Data Placement Supported: Not Supported 00:13:47.998 00:13:47.998 Controller Memory Buffer Support 00:13:47.998 ================================ 00:13:47.998 Supported: No 00:13:47.998 00:13:47.998 Persistent Memory Region Support 00:13:47.998 ================================ 00:13:47.998 Supported: No 00:13:47.998 00:13:47.998 Admin Command Set Attributes 00:13:47.998 ============================ 00:13:47.998 Security Send/Receive: Not Supported 00:13:47.998 Format NVM: Not Supported 00:13:47.998 Firmware Activate/Download: Not Supported 00:13:47.998 Namespace Management: Not Supported 00:13:47.998 Device Self-Test: Not Supported 00:13:47.998 Directives: Not Supported 00:13:47.998 NVMe-MI: Not Supported 00:13:47.999 Virtualization Management: Not Supported 00:13:47.999 Doorbell Buffer Config: Not Supported 00:13:47.999 Get LBA Status Capability: Not Supported 00:13:47.999 Command & Feature Lockdown Capability: Not Supported 00:13:47.999 Abort Command Limit: 1 00:13:47.999 Async 
Event Request Limit: 4 00:13:47.999 Number of Firmware Slots: N/A 00:13:47.999 Firmware Slot 1 Read-Only: N/A 00:13:47.999 Firmware Activation Without Reset: N/A 00:13:47.999 Multiple Update Detection Support: N/A 00:13:47.999 Firmware Update Granularity: No Information Provided 00:13:47.999 Per-Namespace SMART Log: No 00:13:47.999 Asymmetric Namespace Access Log Page: Not Supported 00:13:47.999 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:47.999 Command Effects Log Page: Not Supported 00:13:47.999 Get Log Page Extended Data: Supported 00:13:47.999 Telemetry Log Pages: Not Supported 00:13:47.999 Persistent Event Log Pages: Not Supported 00:13:47.999 Supported Log Pages Log Page: May Support 00:13:47.999 Commands Supported & Effects Log Page: Not Supported 00:13:47.999 Feature Identifiers & Effects Log Page:May Support 00:13:47.999 NVMe-MI Commands & Effects Log Page: May Support 00:13:47.999 Data Area 4 for Telemetry Log: Not Supported 00:13:47.999 Error Log Page Entries Supported: 128 00:13:47.999 Keep Alive: Not Supported 00:13:47.999 00:13:47.999 NVM Command Set Attributes 00:13:47.999 ========================== 00:13:47.999 Submission Queue Entry Size 00:13:47.999 Max: 1 00:13:47.999 Min: 1 00:13:47.999 Completion Queue Entry Size 00:13:47.999 Max: 1 00:13:47.999 Min: 1 00:13:47.999 Number of Namespaces: 0 00:13:47.999 Compare Command: Not Supported 00:13:47.999 Write Uncorrectable Command: Not Supported 00:13:47.999 Dataset Management Command: Not Supported 00:13:47.999 Write Zeroes Command: Not Supported 00:13:47.999 Set Features Save Field: Not Supported 00:13:47.999 Reservations: Not Supported 00:13:47.999 Timestamp: Not Supported 00:13:47.999 Copy: Not Supported 00:13:47.999 Volatile Write Cache: Not Present 00:13:47.999 Atomic Write Unit (Normal): 1 00:13:47.999 Atomic Write Unit (PFail): 1 00:13:47.999 Atomic Compare & Write Unit: 1 00:13:47.999 Fused Compare & Write: Supported 00:13:47.999 Scatter-Gather List 00:13:47.999 SGL Command Set: Supported 00:13:47.999 SGL Keyed: Supported 00:13:47.999 SGL Bit Bucket Descriptor: Not Supported 00:13:47.999 SGL Metadata Pointer: Not Supported 00:13:47.999 Oversized SGL: Not Supported 00:13:47.999 SGL Metadata Address: Not Supported 00:13:47.999 SGL Offset: Supported 00:13:47.999 Transport SGL Data Block: Not Supported 00:13:47.999 Replay Protected Memory Block: Not Supported 00:13:47.999 00:13:47.999 Firmware Slot Information 00:13:47.999 ========================= 00:13:47.999 Active slot: 0 00:13:47.999 00:13:47.999 00:13:47.999 Error Log 00:13:47.999 ========= 00:13:47.999 00:13:47.999 Active Namespaces 00:13:47.999 ================= 00:13:47.999 Discovery Log Page 00:13:47.999 ================== 00:13:47.999 Generation Counter: 2 00:13:47.999 Number of Records: 2 00:13:47.999 Record Format: 0 00:13:47.999 00:13:47.999 Discovery Log Entry 0 00:13:47.999 ---------------------- 00:13:47.999 Transport Type: 3 (TCP) 00:13:47.999 Address Family: 1 (IPv4) 00:13:47.999 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:47.999 Entry Flags: 00:13:47.999 Duplicate Returned Information: 1 00:13:47.999 Explicit Persistent Connection Support for Discovery: 1 00:13:47.999 Transport Requirements: 00:13:47.999 Secure Channel: Not Required 00:13:47.999 Port ID: 0 (0x0000) 00:13:47.999 Controller ID: 65535 (0xffff) 00:13:47.999 Admin Max SQ Size: 128 00:13:47.999 Transport Service Identifier: 4420 00:13:47.999 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:47.999 Transport Address: 10.0.0.3 00:13:47.999 
Discovery Log Entry 1 00:13:47.999 ---------------------- 00:13:47.999 Transport Type: 3 (TCP) 00:13:47.999 Address Family: 1 (IPv4) 00:13:47.999 Subsystem Type: 2 (NVM Subsystem) 00:13:47.999 Entry Flags: 00:13:47.999 Duplicate Returned Information: 0 00:13:47.999 Explicit Persistent Connection Support for Discovery: 0 00:13:47.999 Transport Requirements: 00:13:47.999 Secure Channel: Not Required 00:13:47.999 Port ID: 0 (0x0000) 00:13:47.999 Controller ID: 65535 (0xffff) 00:13:47.999 Admin Max SQ Size: 128 00:13:47.999 Transport Service Identifier: 4420 00:13:47.999 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:47.999 Transport Address: 10.0.0.3 [2024-12-06 12:21:34.446834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.999 [2024-12-06 12:21:34.446841] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:47.999 [2024-12-06 12:21:34.446845] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:47.999 [2024-12-06 12:21:34.446859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.999 [2024-12-06 12:21:34.446867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.999 [2024-12-06 12:21:34.446870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.999 [2024-12-06 12:21:34.446874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ed40) on tqpair=0x15ca750 00:13:47.999 [2024-12-06 12:21:34.446992] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:13:47.999 [2024-12-06 12:21:34.447008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e740) on tqpair=0x15ca750 00:13:47.999 [2024-12-06 12:21:34.447016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.999 [2024-12-06 12:21:34.447022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e8c0) on tqpair=0x15ca750 00:13:47.999 [2024-12-06 12:21:34.447026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.999 [2024-12-06 12:21:34.447031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ea40) on tqpair=0x15ca750 00:13:47.999 [2024-12-06 12:21:34.447036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.999 [2024-12-06 12:21:34.447041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:47.999 [2024-12-06 12:21:34.447046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.999 [2024-12-06 12:21:34.447055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.999 [2024-12-06 12:21:34.447060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:48.000 [2024-12-06 12:21:34.447072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.000 [2024-12-06 12:21:34.447097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:48.000 [2024-12-06 12:21:34.447150] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.000 [2024-12-06 12:21:34.447157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.000 [2024-12-06 12:21:34.447161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:48.000 [2024-12-06 12:21:34.447206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:48.000 [2024-12-06 12:21:34.447233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.000 [2024-12-06 12:21:34.447277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:48.000 [2024-12-06 12:21:34.447348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.000 [2024-12-06 12:21:34.447355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.000 [2024-12-06 12:21:34.447359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:48.000 [2024-12-06 12:21:34.447374] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:13:48.000 [2024-12-06 12:21:34.447380] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:13:48.000 [2024-12-06 12:21:34.447392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:48.000 [2024-12-06 12:21:34.447409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.000 [2024-12-06 12:21:34.447429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:48.000 [2024-12-06 12:21:34.447472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.000 [2024-12-06 12:21:34.447480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.000 [2024-12-06 12:21:34.447483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:48.000 [2024-12-06 12:21:34.447499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:48.000 [2024-12-06 12:21:34.447516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.000 [2024-12-06 12:21:34.447534] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:48.000 [2024-12-06 12:21:34.447606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.000 [2024-12-06 12:21:34.447613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.000 [2024-12-06 12:21:34.447616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:48.000 [2024-12-06 12:21:34.447631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:48.000 [2024-12-06 12:21:34.447646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.000 [2024-12-06 12:21:34.447663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:48.000 [2024-12-06 12:21:34.447703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.000 [2024-12-06 12:21:34.447710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.000 [2024-12-06 12:21:34.447713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:48.000 [2024-12-06 12:21:34.447727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:48.000 [2024-12-06 12:21:34.447743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.000 [2024-12-06 12:21:34.447760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:48.000 [2024-12-06 12:21:34.447806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.000 [2024-12-06 12:21:34.447812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.000 [2024-12-06 12:21:34.447816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:48.000 [2024-12-06 12:21:34.447830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:48.000 [2024-12-06 12:21:34.447846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.000 [2024-12-06 12:21:34.447862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:48.000 [2024-12-06 12:21:34.447907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.000 [2024-12-06 
12:21:34.447914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.000 [2024-12-06 12:21:34.447918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:48.000 [2024-12-06 12:21:34.447932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.447940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:48.000 [2024-12-06 12:21:34.447947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.000 [2024-12-06 12:21:34.447964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:48.000 [2024-12-06 12:21:34.448010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.000 [2024-12-06 12:21:34.448017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.000 [2024-12-06 12:21:34.448021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.448025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:48.000 [2024-12-06 12:21:34.448035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.448039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.000 [2024-12-06 12:21:34.448043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:48.000 [2024-12-06 12:21:34.448050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.000 [2024-12-06 12:21:34.448067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:48.000 [2024-12-06 12:21:34.448110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.000 [2024-12-06 12:21:34.448117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.001 [2024-12-06 12:21:34.448120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.001 [2024-12-06 12:21:34.448124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:48.001 [2024-12-06 12:21:34.448134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.001 [2024-12-06 12:21:34.448139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.001 [2024-12-06 12:21:34.448143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:48.001 [2024-12-06 12:21:34.448150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.001 [2024-12-06 12:21:34.448167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:48.001 [2024-12-06 12:21:34.452258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.001 [2024-12-06 12:21:34.452278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.001 [2024-12-06 12:21:34.452300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.001 
[2024-12-06 12:21:34.452305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:48.001 [2024-12-06 12:21:34.452320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.001 [2024-12-06 12:21:34.452326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.001 [2024-12-06 12:21:34.452330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15ca750) 00:13:48.001 [2024-12-06 12:21:34.452339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.001 [2024-12-06 12:21:34.452364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ebc0, cid 3, qid 0 00:13:48.001 [2024-12-06 12:21:34.452422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.001 [2024-12-06 12:21:34.452429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.001 [2024-12-06 12:21:34.452433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.001 [2024-12-06 12:21:34.452437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ebc0) on tqpair=0x15ca750 00:13:48.001 [2024-12-06 12:21:34.452446] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:13:48.001 00:13:48.001 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:48.001 [2024-12-06 12:21:34.491065] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
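Editor's note: the spdk_nvme_identify invocation above connects to the TCP target and prints the controller report that follows. As an illustration only (not part of the test output or the tool's own source), a minimal C sketch of the same connect-and-identify flow using SPDK's public API might look like this; the transport string is simply copied from the command line above.

/* identify_sketch.c - illustrative only; mirrors the spdk_nvme_identify run above. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";
    if (spdk_env_init(&env_opts) < 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    /* Same transport ID string the test passes with -r (address/NQN from the log). */
    struct spdk_nvme_transport_id trid = {};
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        fprintf(stderr, "bad transport ID\n");
        return 1;
    }

    /* Connecting the admin queue is what produces the icreq/icresp and
     * FABRIC CONNECT / PROPERTY GET exchanges seen in the debug entries. */
    struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    /* The IDENTIFY controller data is cached by the driver after initialization. */
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Model: %.40s  Serial: %.20s  FW: %.8s\n", cdata->mn, cdata->sn, cdata->fr);

    spdk_nvme_detach(ctrlr);
    return 0;
}

Build against the SPDK headers and libraries; spdk_nvme_connect() drives the controller initialization state machine (read vs/cap, CC.EN, CSTS.RDY) that the surrounding debug log records.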
00:13:48.001 [2024-12-06 12:21:34.491114] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73549 ] 00:13:48.001 [2024-12-06 12:21:34.646663] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:13:48.001 [2024-12-06 12:21:34.646718] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:48.001 [2024-12-06 12:21:34.646725] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:48.001 [2024-12-06 12:21:34.646738] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:48.001 [2024-12-06 12:21:34.646748] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:48.001 [2024-12-06 12:21:34.647083] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:13:48.001 [2024-12-06 12:21:34.647145] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2165750 0 00:13:48.266 [2024-12-06 12:21:34.652290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:48.266 [2024-12-06 12:21:34.652315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:48.266 [2024-12-06 12:21:34.652322] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:48.266 [2024-12-06 12:21:34.652325] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:48.266 [2024-12-06 12:21:34.652354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.266 [2024-12-06 12:21:34.652360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.266 [2024-12-06 12:21:34.652364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2165750) 00:13:48.266 [2024-12-06 12:21:34.652376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:48.266 [2024-12-06 12:21:34.652407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9740, cid 0, qid 0 00:13:48.266 [2024-12-06 12:21:34.660346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.266 [2024-12-06 12:21:34.660369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.266 [2024-12-06 12:21:34.660374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.266 [2024-12-06 12:21:34.660379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9740) on tqpair=0x2165750 00:13:48.266 [2024-12-06 12:21:34.660392] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:48.266 [2024-12-06 12:21:34.660399] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:13:48.266 [2024-12-06 12:21:34.660406] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:13:48.266 [2024-12-06 12:21:34.660422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.266 [2024-12-06 12:21:34.660427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.266 [2024-12-06 12:21:34.660431] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2165750) 00:13:48.266 [2024-12-06 12:21:34.660441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.266 [2024-12-06 12:21:34.660469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9740, cid 0, qid 0 00:13:48.266 [2024-12-06 12:21:34.660526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.266 [2024-12-06 12:21:34.660534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.266 [2024-12-06 12:21:34.660538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.266 [2024-12-06 12:21:34.660542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9740) on tqpair=0x2165750 00:13:48.266 [2024-12-06 12:21:34.660548] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:13:48.266 [2024-12-06 12:21:34.660556] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:13:48.266 [2024-12-06 12:21:34.660564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.266 [2024-12-06 12:21:34.660568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.266 [2024-12-06 12:21:34.660588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2165750) 00:13:48.266 [2024-12-06 12:21:34.660596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.266 [2024-12-06 12:21:34.660631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9740, cid 0, qid 0 00:13:48.266 [2024-12-06 12:21:34.660690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.266 [2024-12-06 12:21:34.660697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.267 [2024-12-06 12:21:34.660701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.660705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9740) on tqpair=0x2165750 00:13:48.267 [2024-12-06 12:21:34.660711] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:13:48.267 [2024-12-06 12:21:34.660720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:13:48.267 [2024-12-06 12:21:34.660728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.660732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.660736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2165750) 00:13:48.267 [2024-12-06 12:21:34.660744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.267 [2024-12-06 12:21:34.660763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9740, cid 0, qid 0 00:13:48.267 [2024-12-06 12:21:34.660808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.267 [2024-12-06 12:21:34.660816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.267 
[2024-12-06 12:21:34.660819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.660824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9740) on tqpair=0x2165750 00:13:48.267 [2024-12-06 12:21:34.660830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:48.267 [2024-12-06 12:21:34.660840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.660845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.660849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2165750) 00:13:48.267 [2024-12-06 12:21:34.660856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.267 [2024-12-06 12:21:34.660875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9740, cid 0, qid 0 00:13:48.267 [2024-12-06 12:21:34.660921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.267 [2024-12-06 12:21:34.660928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.267 [2024-12-06 12:21:34.660932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.660952] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9740) on tqpair=0x2165750 00:13:48.267 [2024-12-06 12:21:34.660972] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:13:48.267 [2024-12-06 12:21:34.660977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:13:48.267 [2024-12-06 12:21:34.660986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:48.267 [2024-12-06 12:21:34.661097] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:13:48.267 [2024-12-06 12:21:34.661102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:48.267 [2024-12-06 12:21:34.661111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2165750) 00:13:48.267 [2024-12-06 12:21:34.661127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.267 [2024-12-06 12:21:34.661146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9740, cid 0, qid 0 00:13:48.267 [2024-12-06 12:21:34.661193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.267 [2024-12-06 12:21:34.661200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.267 [2024-12-06 12:21:34.661204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9740) on tqpair=0x2165750 
00:13:48.267 [2024-12-06 12:21:34.661214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:48.267 [2024-12-06 12:21:34.661225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2165750) 00:13:48.267 [2024-12-06 12:21:34.661241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.267 [2024-12-06 12:21:34.661313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9740, cid 0, qid 0 00:13:48.267 [2024-12-06 12:21:34.661375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.267 [2024-12-06 12:21:34.661383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.267 [2024-12-06 12:21:34.661386] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9740) on tqpair=0x2165750 00:13:48.267 [2024-12-06 12:21:34.661395] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:48.267 [2024-12-06 12:21:34.661401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:13:48.267 [2024-12-06 12:21:34.661409] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:13:48.267 [2024-12-06 12:21:34.661419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:13:48.267 [2024-12-06 12:21:34.661430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2165750) 00:13:48.267 [2024-12-06 12:21:34.661442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.267 [2024-12-06 12:21:34.661462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9740, cid 0, qid 0 00:13:48.267 [2024-12-06 12:21:34.661548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.267 [2024-12-06 12:21:34.661555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.267 [2024-12-06 12:21:34.661559] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661563] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2165750): datao=0, datal=4096, cccid=0 00:13:48.267 [2024-12-06 12:21:34.661568] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21c9740) on tqpair(0x2165750): expected_datao=0, payload_size=4096 00:13:48.267 [2024-12-06 12:21:34.661572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661580] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661584] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.267 [2024-12-06 12:21:34.661598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.267 [2024-12-06 12:21:34.661602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9740) on tqpair=0x2165750 00:13:48.267 [2024-12-06 12:21:34.661614] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:13:48.267 [2024-12-06 12:21:34.661620] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:13:48.267 [2024-12-06 12:21:34.661624] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:13:48.267 [2024-12-06 12:21:34.661629] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:13:48.267 [2024-12-06 12:21:34.661633] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:13:48.267 [2024-12-06 12:21:34.661638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:13:48.267 [2024-12-06 12:21:34.661647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:13:48.267 [2024-12-06 12:21:34.661655] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.267 [2024-12-06 12:21:34.661678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2165750) 00:13:48.268 [2024-12-06 12:21:34.661685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.268 [2024-12-06 12:21:34.661705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9740, cid 0, qid 0 00:13:48.268 [2024-12-06 12:21:34.661752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.268 [2024-12-06 12:21:34.661759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.268 [2024-12-06 12:21:34.661762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.661766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9740) on tqpair=0x2165750 00:13:48.268 [2024-12-06 12:21:34.661778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.661782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.661786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2165750) 00:13:48.268 [2024-12-06 12:21:34.661793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.268 [2024-12-06 12:21:34.661799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.661803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.268 [2024-12-06 
12:21:34.661807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2165750) 00:13:48.268 [2024-12-06 12:21:34.661812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.268 [2024-12-06 12:21:34.661818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.661822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.661826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2165750) 00:13:48.268 [2024-12-06 12:21:34.661831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.268 [2024-12-06 12:21:34.661837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.661841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.661844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.268 [2024-12-06 12:21:34.661850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.268 [2024-12-06 12:21:34.661855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:48.268 [2024-12-06 12:21:34.661863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:48.268 [2024-12-06 12:21:34.661870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.661874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2165750) 00:13:48.268 [2024-12-06 12:21:34.661881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.268 [2024-12-06 12:21:34.661901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9740, cid 0, qid 0 00:13:48.268 [2024-12-06 12:21:34.661908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c98c0, cid 1, qid 0 00:13:48.268 [2024-12-06 12:21:34.661913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9a40, cid 2, qid 0 00:13:48.268 [2024-12-06 12:21:34.661918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.268 [2024-12-06 12:21:34.661923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9d40, cid 4, qid 0 00:13:48.268 [2024-12-06 12:21:34.662005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.268 [2024-12-06 12:21:34.662012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.268 [2024-12-06 12:21:34.662015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.662019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9d40) on tqpair=0x2165750 00:13:48.268 [2024-12-06 12:21:34.662024] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:13:48.268 [2024-12-06 12:21:34.662033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:48.268 [2024-12-06 12:21:34.662042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:13:48.268 [2024-12-06 12:21:34.662049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:48.268 [2024-12-06 12:21:34.662056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.662060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.662063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2165750) 00:13:48.268 [2024-12-06 12:21:34.662071] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.268 [2024-12-06 12:21:34.662089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9d40, cid 4, qid 0 00:13:48.268 [2024-12-06 12:21:34.662140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.268 [2024-12-06 12:21:34.662147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.268 [2024-12-06 12:21:34.662151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.662154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9d40) on tqpair=0x2165750 00:13:48.268 [2024-12-06 12:21:34.662228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:13:48.268 [2024-12-06 12:21:34.662242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:48.268 [2024-12-06 12:21:34.662251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.662255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2165750) 00:13:48.268 [2024-12-06 12:21:34.662262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.268 [2024-12-06 12:21:34.662283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9d40, cid 4, qid 0 00:13:48.268 [2024-12-06 12:21:34.662338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.268 [2024-12-06 12:21:34.662345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.268 [2024-12-06 12:21:34.662349] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.662353] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2165750): datao=0, datal=4096, cccid=4 00:13:48.268 [2024-12-06 12:21:34.662357] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21c9d40) on tqpair(0x2165750): expected_datao=0, payload_size=4096 00:13:48.268 [2024-12-06 12:21:34.662362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.662369] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.662373] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.268 [2024-12-06 
12:21:34.662380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.268 [2024-12-06 12:21:34.662386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.268 [2024-12-06 12:21:34.662390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.662394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9d40) on tqpair=0x2165750 00:13:48.268 [2024-12-06 12:21:34.662410] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:13:48.268 [2024-12-06 12:21:34.662424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:13:48.268 [2024-12-06 12:21:34.662435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:13:48.268 [2024-12-06 12:21:34.662444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.268 [2024-12-06 12:21:34.662448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2165750) 00:13:48.268 [2024-12-06 12:21:34.662455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.269 [2024-12-06 12:21:34.662476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9d40, cid 4, qid 0 00:13:48.269 [2024-12-06 12:21:34.662539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.269 [2024-12-06 12:21:34.662546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.269 [2024-12-06 12:21:34.662549] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662553] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2165750): datao=0, datal=4096, cccid=4 00:13:48.269 [2024-12-06 12:21:34.662557] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21c9d40) on tqpair(0x2165750): expected_datao=0, payload_size=4096 00:13:48.269 [2024-12-06 12:21:34.662562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662569] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662572] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.269 [2024-12-06 12:21:34.662586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.269 [2024-12-06 12:21:34.662589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9d40) on tqpair=0x2165750 00:13:48.269 [2024-12-06 12:21:34.662608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:48.269 [2024-12-06 12:21:34.662619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:48.269 [2024-12-06 12:21:34.662627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x2165750) 00:13:48.269 [2024-12-06 12:21:34.662639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.269 [2024-12-06 12:21:34.662659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9d40, cid 4, qid 0 00:13:48.269 [2024-12-06 12:21:34.662710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.269 [2024-12-06 12:21:34.662717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.269 [2024-12-06 12:21:34.662720] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662724] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2165750): datao=0, datal=4096, cccid=4 00:13:48.269 [2024-12-06 12:21:34.662728] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21c9d40) on tqpair(0x2165750): expected_datao=0, payload_size=4096 00:13:48.269 [2024-12-06 12:21:34.662733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662740] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662743] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.269 [2024-12-06 12:21:34.662757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.269 [2024-12-06 12:21:34.662761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9d40) on tqpair=0x2165750 00:13:48.269 [2024-12-06 12:21:34.662773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:48.269 [2024-12-06 12:21:34.662782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:13:48.269 [2024-12-06 12:21:34.662793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:13:48.269 [2024-12-06 12:21:34.662799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:48.269 [2024-12-06 12:21:34.662805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:48.269 [2024-12-06 12:21:34.662810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:13:48.269 [2024-12-06 12:21:34.662815] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:13:48.269 [2024-12-06 12:21:34.662820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:13:48.269 [2024-12-06 12:21:34.662825] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:13:48.269 [2024-12-06 12:21:34.662839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.269 
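Editor's note: the sequence above ends with "Namespace 1 was added" and the controller reaching the ready state. Once spdk_nvme_connect() returns (as in the previous sketch), the active namespaces populated by those IDENTIFY steps can be walked with the public API. A hedged sketch, assuming `ctrlr` came from a successful connect:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Walk the active namespace list discovered during the identify steps above. */
static void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
        if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
            continue;
        }
        printf("nsid %u: %" PRIu64 " bytes, %u-byte sectors\n",
               nsid,
               spdk_nvme_ns_get_size(ns),
               spdk_nvme_ns_get_sector_size(ns));
    }
}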
[2024-12-06 12:21:34.662844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2165750) 00:13:48.269 [2024-12-06 12:21:34.662851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.269 [2024-12-06 12:21:34.662858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2165750) 00:13:48.269 [2024-12-06 12:21:34.662872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.269 [2024-12-06 12:21:34.662895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9d40, cid 4, qid 0 00:13:48.269 [2024-12-06 12:21:34.662903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9ec0, cid 5, qid 0 00:13:48.269 [2024-12-06 12:21:34.662959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.269 [2024-12-06 12:21:34.662965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.269 [2024-12-06 12:21:34.662969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9d40) on tqpair=0x2165750 00:13:48.269 [2024-12-06 12:21:34.662980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.269 [2024-12-06 12:21:34.662986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.269 [2024-12-06 12:21:34.662989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.662993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9ec0) on tqpair=0x2165750 00:13:48.269 [2024-12-06 12:21:34.663003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.663007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2165750) 00:13:48.269 [2024-12-06 12:21:34.663015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.269 [2024-12-06 12:21:34.663033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9ec0, cid 5, qid 0 00:13:48.269 [2024-12-06 12:21:34.663073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.269 [2024-12-06 12:21:34.663079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.269 [2024-12-06 12:21:34.663083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.663087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9ec0) on tqpair=0x2165750 00:13:48.269 [2024-12-06 12:21:34.663097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.663101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2165750) 00:13:48.269 [2024-12-06 12:21:34.663108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.269 [2024-12-06 12:21:34.663125] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9ec0, cid 5, qid 0 00:13:48.269 [2024-12-06 12:21:34.663195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.269 [2024-12-06 12:21:34.663205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.269 [2024-12-06 12:21:34.663209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.663213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9ec0) on tqpair=0x2165750 00:13:48.269 [2024-12-06 12:21:34.663232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.269 [2024-12-06 12:21:34.663254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2165750) 00:13:48.270 [2024-12-06 12:21:34.663261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.270 [2024-12-06 12:21:34.663283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9ec0, cid 5, qid 0 00:13:48.270 [2024-12-06 12:21:34.663335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.270 [2024-12-06 12:21:34.663342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.270 [2024-12-06 12:21:34.663346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9ec0) on tqpair=0x2165750 00:13:48.270 [2024-12-06 12:21:34.663369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2165750) 00:13:48.270 [2024-12-06 12:21:34.663383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.270 [2024-12-06 12:21:34.663391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2165750) 00:13:48.270 [2024-12-06 12:21:34.663402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.270 [2024-12-06 12:21:34.663409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2165750) 00:13:48.270 [2024-12-06 12:21:34.663420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.270 [2024-12-06 12:21:34.663431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2165750) 00:13:48.270 [2024-12-06 12:21:34.663442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.270 [2024-12-06 12:21:34.663464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9ec0, cid 5, qid 0 00:13:48.270 
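Editor's note: the GET LOG PAGE (02h) entries just above are the driver fetching the error, SMART/health, firmware slot and command-effects pages before printing the report below. Fetching one of these explicitly through the public API is an asynchronous admin command followed by polling; an illustrative sketch for the SMART/health page (again assuming a connected `ctrlr`, a plain static buffer being sufficient for the TCP transport used here):

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"
#include "spdk/nvme_spec.h"

static bool g_done;

static void log_page_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "GET LOG PAGE failed\n");
    }
    g_done = true;
}

/* Read the SMART / Health Information log page (LID 02h). */
static int read_health_log(struct spdk_nvme_ctrlr *ctrlr)
{
    static struct spdk_nvme_health_information_page health;
    g_done = false;

    int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                              SPDK_NVME_GLOBAL_NS_TAG,
                                              &health, sizeof(health), 0,
                                              log_page_done, NULL);
    if (rc != 0) {
        return rc;
    }
    /* Admin commands complete only while the admin queue is being polled. */
    while (!g_done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    /* The spec reports temperature in Kelvin, matching the report below. */
    printf("temperature: %u K\n", health.temperature);
    return 0;
}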
[2024-12-06 12:21:34.663473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9d40, cid 4, qid 0 00:13:48.270 [2024-12-06 12:21:34.663478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ca040, cid 6, qid 0 00:13:48.270 [2024-12-06 12:21:34.663483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ca1c0, cid 7, qid 0 00:13:48.270 [2024-12-06 12:21:34.663638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.270 [2024-12-06 12:21:34.663645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.270 [2024-12-06 12:21:34.663649] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663652] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2165750): datao=0, datal=8192, cccid=5 00:13:48.270 [2024-12-06 12:21:34.663657] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21c9ec0) on tqpair(0x2165750): expected_datao=0, payload_size=8192 00:13:48.270 [2024-12-06 12:21:34.663661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663676] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663681] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.270 [2024-12-06 12:21:34.663692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.270 [2024-12-06 12:21:34.663696] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663700] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2165750): datao=0, datal=512, cccid=4 00:13:48.270 [2024-12-06 12:21:34.663704] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21c9d40) on tqpair(0x2165750): expected_datao=0, payload_size=512 00:13:48.270 [2024-12-06 12:21:34.663709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663715] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663718] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.270 [2024-12-06 12:21:34.663729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.270 [2024-12-06 12:21:34.663733] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663736] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2165750): datao=0, datal=512, cccid=6 00:13:48.270 [2024-12-06 12:21:34.663740] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ca040) on tqpair(0x2165750): expected_datao=0, payload_size=512 00:13:48.270 [2024-12-06 12:21:34.663745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663751] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663754] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.270 [2024-12-06 12:21:34.663765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.270 [2024-12-06 12:21:34.663769] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663772] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2165750): datao=0, datal=4096, cccid=7 00:13:48.270 [2024-12-06 12:21:34.663776] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ca1c0) on tqpair(0x2165750): expected_datao=0, payload_size=4096 00:13:48.270 [2024-12-06 12:21:34.663780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663787] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663790] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.270 [2024-12-06 12:21:34.663804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.270 [2024-12-06 12:21:34.663807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9ec0) on tqpair=0x2165750 00:13:48.270 [2024-12-06 12:21:34.663826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.270 [2024-12-06 12:21:34.663832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.270 [2024-12-06 12:21:34.663836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.270 [2024-12-06 12:21:34.663840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9d40) on tqpair=0x2165750 00:13:48.270 ===================================================== 00:13:48.270 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:48.270 ===================================================== 00:13:48.270 Controller Capabilities/Features 00:13:48.270 ================================ 00:13:48.270 Vendor ID: 8086 00:13:48.270 Subsystem Vendor ID: 8086 00:13:48.270 Serial Number: SPDK00000000000001 00:13:48.270 Model Number: SPDK bdev Controller 00:13:48.270 Firmware Version: 25.01 00:13:48.270 Recommended Arb Burst: 6 00:13:48.270 IEEE OUI Identifier: e4 d2 5c 00:13:48.270 Multi-path I/O 00:13:48.270 May have multiple subsystem ports: Yes 00:13:48.270 May have multiple controllers: Yes 00:13:48.270 Associated with SR-IOV VF: No 00:13:48.270 Max Data Transfer Size: 131072 00:13:48.270 Max Number of Namespaces: 32 00:13:48.270 Max Number of I/O Queues: 127 00:13:48.270 NVMe Specification Version (VS): 1.3 00:13:48.270 NVMe Specification Version (Identify): 1.3 00:13:48.270 Maximum Queue Entries: 128 00:13:48.270 Contiguous Queues Required: Yes 00:13:48.270 Arbitration Mechanisms Supported 00:13:48.270 Weighted Round Robin: Not Supported 00:13:48.270 Vendor Specific: Not Supported 00:13:48.270 Reset Timeout: 15000 ms 00:13:48.270 Doorbell Stride: 4 bytes 00:13:48.271 NVM Subsystem Reset: Not Supported 00:13:48.271 Command Sets Supported 00:13:48.271 NVM Command Set: Supported 00:13:48.271 Boot Partition: Not Supported 00:13:48.271 Memory Page Size Minimum: 4096 bytes 00:13:48.271 Memory Page Size Maximum: 4096 bytes 00:13:48.271 Persistent Memory Region: Not Supported 00:13:48.271 Optional Asynchronous Events Supported 00:13:48.271 Namespace Attribute Notices: Supported 00:13:48.271 Firmware Activation Notices: Not Supported 00:13:48.271 ANA Change Notices: Not Supported 00:13:48.271 PLE Aggregate Log Change Notices: Not Supported 00:13:48.271 LBA Status Info Alert 
Notices: Not Supported 00:13:48.271 EGE Aggregate Log Change Notices: Not Supported 00:13:48.271 Normal NVM Subsystem Shutdown event: Not Supported 00:13:48.271 Zone Descriptor Change Notices: Not Supported 00:13:48.271 Discovery Log Change Notices: Not Supported 00:13:48.271 Controller Attributes 00:13:48.271 128-bit Host Identifier: Supported 00:13:48.271 Non-Operational Permissive Mode: Not Supported 00:13:48.271 NVM Sets: Not Supported 00:13:48.271 Read Recovery Levels: Not Supported 00:13:48.271 Endurance Groups: Not Supported 00:13:48.271 Predictable Latency Mode: Not Supported 00:13:48.271 Traffic Based Keep ALive: Not Supported 00:13:48.271 Namespace Granularity: Not Supported 00:13:48.271 SQ Associations: Not Supported 00:13:48.271 UUID List: Not Supported 00:13:48.271 Multi-Domain Subsystem: Not Supported 00:13:48.271 Fixed Capacity Management: Not Supported 00:13:48.271 Variable Capacity Management: Not Supported 00:13:48.271 Delete Endurance Group: Not Supported 00:13:48.271 Delete NVM Set: Not Supported 00:13:48.271 Extended LBA Formats Supported: Not Supported 00:13:48.271 Flexible Data Placement Supported: Not Supported 00:13:48.271 00:13:48.271 Controller Memory Buffer Support 00:13:48.271 ================================ 00:13:48.271 Supported: No 00:13:48.271 00:13:48.271 Persistent Memory Region Support 00:13:48.271 ================================ 00:13:48.271 Supported: No 00:13:48.271 00:13:48.271 Admin Command Set Attributes 00:13:48.271 ============================ 00:13:48.271 Security Send/Receive: Not Supported 00:13:48.271 Format NVM: Not Supported 00:13:48.271 Firmware Activate/Download: Not Supported 00:13:48.271 Namespace Management: Not Supported 00:13:48.271 Device Self-Test: Not Supported 00:13:48.271 Directives: Not Supported 00:13:48.271 NVMe-MI: Not Supported 00:13:48.271 Virtualization Management: Not Supported 00:13:48.271 Doorbell Buffer Config: Not Supported 00:13:48.271 Get LBA Status Capability: Not Supported 00:13:48.271 Command & Feature Lockdown Capability: Not Supported 00:13:48.271 Abort Command Limit: 4 00:13:48.271 Async Event Request Limit: 4 00:13:48.271 Number of Firmware Slots: N/A 00:13:48.271 Firmware Slot 1 Read-Only: N/A 00:13:48.271 Firmware Activation Without Reset: [2024-12-06 12:21:34.663851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.271 [2024-12-06 12:21:34.663857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.271 [2024-12-06 12:21:34.663861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.271 [2024-12-06 12:21:34.663865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ca040) on tqpair=0x2165750 00:13:48.271 [2024-12-06 12:21:34.663872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.271 [2024-12-06 12:21:34.663878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.271 [2024-12-06 12:21:34.663881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.271 [2024-12-06 12:21:34.663885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ca1c0) on tqpair=0x2165750 00:13:48.271 N/A 00:13:48.271 Multiple Update Detection Support: N/A 00:13:48.271 Firmware Update Granularity: No Information Provided 00:13:48.271 Per-Namespace SMART Log: No 00:13:48.271 Asymmetric Namespace Access Log Page: Not Supported 00:13:48.271 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:48.271 Command Effects Log Page: Supported 00:13:48.271 Get Log Page Extended 
Data: Supported 00:13:48.271 Telemetry Log Pages: Not Supported 00:13:48.271 Persistent Event Log Pages: Not Supported 00:13:48.271 Supported Log Pages Log Page: May Support 00:13:48.271 Commands Supported & Effects Log Page: Not Supported 00:13:48.271 Feature Identifiers & Effects Log Page:May Support 00:13:48.271 NVMe-MI Commands & Effects Log Page: May Support 00:13:48.271 Data Area 4 for Telemetry Log: Not Supported 00:13:48.271 Error Log Page Entries Supported: 128 00:13:48.271 Keep Alive: Supported 00:13:48.271 Keep Alive Granularity: 10000 ms 00:13:48.271 00:13:48.271 NVM Command Set Attributes 00:13:48.271 ========================== 00:13:48.271 Submission Queue Entry Size 00:13:48.271 Max: 64 00:13:48.271 Min: 64 00:13:48.271 Completion Queue Entry Size 00:13:48.271 Max: 16 00:13:48.271 Min: 16 00:13:48.271 Number of Namespaces: 32 00:13:48.271 Compare Command: Supported 00:13:48.271 Write Uncorrectable Command: Not Supported 00:13:48.271 Dataset Management Command: Supported 00:13:48.271 Write Zeroes Command: Supported 00:13:48.271 Set Features Save Field: Not Supported 00:13:48.271 Reservations: Supported 00:13:48.271 Timestamp: Not Supported 00:13:48.271 Copy: Supported 00:13:48.271 Volatile Write Cache: Present 00:13:48.271 Atomic Write Unit (Normal): 1 00:13:48.271 Atomic Write Unit (PFail): 1 00:13:48.271 Atomic Compare & Write Unit: 1 00:13:48.271 Fused Compare & Write: Supported 00:13:48.271 Scatter-Gather List 00:13:48.271 SGL Command Set: Supported 00:13:48.271 SGL Keyed: Supported 00:13:48.271 SGL Bit Bucket Descriptor: Not Supported 00:13:48.271 SGL Metadata Pointer: Not Supported 00:13:48.271 Oversized SGL: Not Supported 00:13:48.271 SGL Metadata Address: Not Supported 00:13:48.271 SGL Offset: Supported 00:13:48.271 Transport SGL Data Block: Not Supported 00:13:48.271 Replay Protected Memory Block: Not Supported 00:13:48.271 00:13:48.271 Firmware Slot Information 00:13:48.271 ========================= 00:13:48.271 Active slot: 1 00:13:48.271 Slot 1 Firmware Revision: 25.01 00:13:48.271 00:13:48.271 00:13:48.271 Commands Supported and Effects 00:13:48.271 ============================== 00:13:48.271 Admin Commands 00:13:48.271 -------------- 00:13:48.271 Get Log Page (02h): Supported 00:13:48.271 Identify (06h): Supported 00:13:48.271 Abort (08h): Supported 00:13:48.271 Set Features (09h): Supported 00:13:48.271 Get Features (0Ah): Supported 00:13:48.272 Asynchronous Event Request (0Ch): Supported 00:13:48.272 Keep Alive (18h): Supported 00:13:48.272 I/O Commands 00:13:48.272 ------------ 00:13:48.272 Flush (00h): Supported LBA-Change 00:13:48.272 Write (01h): Supported LBA-Change 00:13:48.272 Read (02h): Supported 00:13:48.272 Compare (05h): Supported 00:13:48.272 Write Zeroes (08h): Supported LBA-Change 00:13:48.272 Dataset Management (09h): Supported LBA-Change 00:13:48.272 Copy (19h): Supported LBA-Change 00:13:48.272 00:13:48.272 Error Log 00:13:48.272 ========= 00:13:48.272 00:13:48.272 Arbitration 00:13:48.272 =========== 00:13:48.272 Arbitration Burst: 1 00:13:48.272 00:13:48.272 Power Management 00:13:48.272 ================ 00:13:48.272 Number of Power States: 1 00:13:48.272 Current Power State: Power State #0 00:13:48.272 Power State #0: 00:13:48.272 Max Power: 0.00 W 00:13:48.272 Non-Operational State: Operational 00:13:48.272 Entry Latency: Not Reported 00:13:48.272 Exit Latency: Not Reported 00:13:48.272 Relative Read Throughput: 0 00:13:48.272 Relative Read Latency: 0 00:13:48.272 Relative Write Throughput: 0 00:13:48.272 Relative Write Latency: 0 
00:13:48.272 Idle Power: Not Reported 00:13:48.272 Active Power: Not Reported 00:13:48.272 Non-Operational Permissive Mode: Not Supported 00:13:48.272 00:13:48.272 Health Information 00:13:48.272 ================== 00:13:48.272 Critical Warnings: 00:13:48.272 Available Spare Space: OK 00:13:48.272 Temperature: OK 00:13:48.272 Device Reliability: OK 00:13:48.272 Read Only: No 00:13:48.272 Volatile Memory Backup: OK 00:13:48.272 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:48.272 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:48.272 Available Spare: 0% 00:13:48.272 Available Spare Threshold: 0% 00:13:48.272 Life Percentage Used:[2024-12-06 12:21:34.663982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.272 [2024-12-06 12:21:34.663989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2165750) 00:13:48.272 [2024-12-06 12:21:34.663996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.272 [2024-12-06 12:21:34.664019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ca1c0, cid 7, qid 0 00:13:48.272 [2024-12-06 12:21:34.664065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.272 [2024-12-06 12:21:34.664073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.272 [2024-12-06 12:21:34.664076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.272 [2024-12-06 12:21:34.664080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ca1c0) on tqpair=0x2165750 00:13:48.272 [2024-12-06 12:21:34.664116] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:13:48.272 [2024-12-06 12:21:34.664126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9740) on tqpair=0x2165750 00:13:48.272 [2024-12-06 12:21:34.664133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.272 [2024-12-06 12:21:34.664138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c98c0) on tqpair=0x2165750 00:13:48.272 [2024-12-06 12:21:34.664143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.272 [2024-12-06 12:21:34.664148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9a40) on tqpair=0x2165750 00:13:48.272 [2024-12-06 12:21:34.664153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.272 [2024-12-06 12:21:34.664158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.272 [2024-12-06 12:21:34.664162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.272 [2024-12-06 12:21:34.664171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.272 [2024-12-06 12:21:34.664175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.272 [2024-12-06 12:21:34.664179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.272 [2024-12-06 12:21:34.664186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:48.272 [2024-12-06 12:21:34.664208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.272 [2024-12-06 12:21:34.664281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.272 [2024-12-06 12:21:34.664290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.272 [2024-12-06 12:21:34.664293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.272 [2024-12-06 12:21:34.664298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.272 [2024-12-06 12:21:34.664305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.272 [2024-12-06 12:21:34.664310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.272 [2024-12-06 12:21:34.664313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.272 [2024-12-06 12:21:34.664321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.272 [2024-12-06 12:21:34.664345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.272 [2024-12-06 12:21:34.664402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.272 [2024-12-06 12:21:34.664409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.272 [2024-12-06 12:21:34.664413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.273 [2024-12-06 12:21:34.664422] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:13:48.273 [2024-12-06 12:21:34.664427] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:13:48.273 [2024-12-06 12:21:34.664437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.273 [2024-12-06 12:21:34.664453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.273 [2024-12-06 12:21:34.664470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.273 [2024-12-06 12:21:34.664513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.273 [2024-12-06 12:21:34.664520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.273 [2024-12-06 12:21:34.664523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.273 [2024-12-06 12:21:34.664538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.273 [2024-12-06 12:21:34.664554] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.273 [2024-12-06 12:21:34.664571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.273 [2024-12-06 12:21:34.664624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.273 [2024-12-06 12:21:34.664631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.273 [2024-12-06 12:21:34.664635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.273 [2024-12-06 12:21:34.664649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.273 [2024-12-06 12:21:34.664664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.273 [2024-12-06 12:21:34.664680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.273 [2024-12-06 12:21:34.664723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.273 [2024-12-06 12:21:34.664730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.273 [2024-12-06 12:21:34.664733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.273 [2024-12-06 12:21:34.664747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.273 [2024-12-06 12:21:34.664762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.273 [2024-12-06 12:21:34.664779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.273 [2024-12-06 12:21:34.664824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.273 [2024-12-06 12:21:34.664831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.273 [2024-12-06 12:21:34.664834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.273 [2024-12-06 12:21:34.664849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.273 [2024-12-06 12:21:34.664864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.273 [2024-12-06 12:21:34.664881] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.273 [2024-12-06 12:21:34.664918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.273 [2024-12-06 12:21:34.664924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.273 [2024-12-06 12:21:34.664928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.273 [2024-12-06 12:21:34.664942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.664950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.273 [2024-12-06 12:21:34.664957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.273 [2024-12-06 12:21:34.664973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.273 [2024-12-06 12:21:34.665016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.273 [2024-12-06 12:21:34.665023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.273 [2024-12-06 12:21:34.665026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.665030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.273 [2024-12-06 12:21:34.665040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.665045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.665048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.273 [2024-12-06 12:21:34.665055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.273 [2024-12-06 12:21:34.665072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.273 [2024-12-06 12:21:34.665112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.273 [2024-12-06 12:21:34.665118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.273 [2024-12-06 12:21:34.665122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.665126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.273 [2024-12-06 12:21:34.665136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.665141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.665144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.273 [2024-12-06 12:21:34.665151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.273 [2024-12-06 12:21:34.665168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.273 [2024-12-06 12:21:34.665217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.273 [2024-12-06 
12:21:34.665226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.273 [2024-12-06 12:21:34.665230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.665234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.273 [2024-12-06 12:21:34.665244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.665249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.273 [2024-12-06 12:21:34.665252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.273 [2024-12-06 12:21:34.665259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.273 [2024-12-06 12:21:34.665278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.274 [2024-12-06 12:21:34.665319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.274 [2024-12-06 12:21:34.665325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.274 [2024-12-06 12:21:34.665329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.274 [2024-12-06 12:21:34.665343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.274 [2024-12-06 12:21:34.665358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.274 [2024-12-06 12:21:34.665375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.274 [2024-12-06 12:21:34.665421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.274 [2024-12-06 12:21:34.665427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.274 [2024-12-06 12:21:34.665431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.274 [2024-12-06 12:21:34.665445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.274 [2024-12-06 12:21:34.665460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.274 [2024-12-06 12:21:34.665477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.274 [2024-12-06 12:21:34.665515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.274 [2024-12-06 12:21:34.665522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.274 [2024-12-06 12:21:34.665525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.274 
[2024-12-06 12:21:34.665529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.274 [2024-12-06 12:21:34.665539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.274 [2024-12-06 12:21:34.665555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.274 [2024-12-06 12:21:34.665572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.274 [2024-12-06 12:21:34.665615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.274 [2024-12-06 12:21:34.665621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.274 [2024-12-06 12:21:34.665625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.274 [2024-12-06 12:21:34.665639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.274 [2024-12-06 12:21:34.665654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.274 [2024-12-06 12:21:34.665671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.274 [2024-12-06 12:21:34.665711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.274 [2024-12-06 12:21:34.665718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.274 [2024-12-06 12:21:34.665722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.274 [2024-12-06 12:21:34.665736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.274 [2024-12-06 12:21:34.665751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.274 [2024-12-06 12:21:34.665768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.274 [2024-12-06 12:21:34.665805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.274 [2024-12-06 12:21:34.665812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.274 [2024-12-06 12:21:34.665816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.274 [2024-12-06 12:21:34.665830] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.274 [2024-12-06 12:21:34.665845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.274 [2024-12-06 12:21:34.665862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.274 [2024-12-06 12:21:34.665907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.274 [2024-12-06 12:21:34.665914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.274 [2024-12-06 12:21:34.665918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.274 [2024-12-06 12:21:34.665931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.665940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.274 [2024-12-06 12:21:34.665947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.274 [2024-12-06 12:21:34.665964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.274 [2024-12-06 12:21:34.666007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.274 [2024-12-06 12:21:34.666014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.274 [2024-12-06 12:21:34.666018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.666022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.274 [2024-12-06 12:21:34.666032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.666036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.666040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.274 [2024-12-06 12:21:34.666047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.274 [2024-12-06 12:21:34.666064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.274 [2024-12-06 12:21:34.666101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.274 [2024-12-06 12:21:34.666108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.274 [2024-12-06 12:21:34.666111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.666115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.274 [2024-12-06 12:21:34.666125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.666130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.274 [2024-12-06 12:21:34.666133] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.274 [2024-12-06 12:21:34.666140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.274 [2024-12-06 12:21:34.666157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.274 [2024-12-06 12:21:34.666212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.275 [2024-12-06 12:21:34.666220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.275 [2024-12-06 12:21:34.666224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.275 [2024-12-06 12:21:34.666238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.275 [2024-12-06 12:21:34.666253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.275 [2024-12-06 12:21:34.666272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.275 [2024-12-06 12:21:34.666313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.275 [2024-12-06 12:21:34.666319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.275 [2024-12-06 12:21:34.666323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.275 [2024-12-06 12:21:34.666337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.275 [2024-12-06 12:21:34.666352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.275 [2024-12-06 12:21:34.666369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.275 [2024-12-06 12:21:34.666409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.275 [2024-12-06 12:21:34.666416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.275 [2024-12-06 12:21:34.666419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.275 [2024-12-06 12:21:34.666433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.275 [2024-12-06 12:21:34.666448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.275 [2024-12-06 12:21:34.666466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.275 [2024-12-06 12:21:34.666506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.275 [2024-12-06 12:21:34.666512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.275 [2024-12-06 12:21:34.666516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.275 [2024-12-06 12:21:34.666530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.275 [2024-12-06 12:21:34.666545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.275 [2024-12-06 12:21:34.666562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.275 [2024-12-06 12:21:34.666602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.275 [2024-12-06 12:21:34.666608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.275 [2024-12-06 12:21:34.666613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.275 [2024-12-06 12:21:34.666626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.275 [2024-12-06 12:21:34.666642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.275 [2024-12-06 12:21:34.666659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.275 [2024-12-06 12:21:34.666701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.275 [2024-12-06 12:21:34.666708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.275 [2024-12-06 12:21:34.666711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.275 [2024-12-06 12:21:34.666725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.275 [2024-12-06 12:21:34.666741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.275 [2024-12-06 12:21:34.666758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.275 [2024-12-06 
12:21:34.666798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.275 [2024-12-06 12:21:34.666805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.275 [2024-12-06 12:21:34.666808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.275 [2024-12-06 12:21:34.666822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.275 [2024-12-06 12:21:34.666837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.275 [2024-12-06 12:21:34.666854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.275 [2024-12-06 12:21:34.666894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.275 [2024-12-06 12:21:34.666900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.275 [2024-12-06 12:21:34.666904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.275 [2024-12-06 12:21:34.666919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.666927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.275 [2024-12-06 12:21:34.666934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.275 [2024-12-06 12:21:34.666951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.275 [2024-12-06 12:21:34.666994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.275 [2024-12-06 12:21:34.667000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.275 [2024-12-06 12:21:34.667004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.667008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.275 [2024-12-06 12:21:34.667017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.667022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.275 [2024-12-06 12:21:34.667026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.275 [2024-12-06 12:21:34.667033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.275 [2024-12-06 12:21:34.667049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.276 [2024-12-06 12:21:34.667086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.276 [2024-12-06 12:21:34.667093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.276 
[2024-12-06 12:21:34.667097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.276 [2024-12-06 12:21:34.667110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.276 [2024-12-06 12:21:34.667126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.276 [2024-12-06 12:21:34.667142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.276 [2024-12-06 12:21:34.667209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.276 [2024-12-06 12:21:34.667218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.276 [2024-12-06 12:21:34.667222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.276 [2024-12-06 12:21:34.667262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.276 [2024-12-06 12:21:34.667279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.276 [2024-12-06 12:21:34.667300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.276 [2024-12-06 12:21:34.667351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.276 [2024-12-06 12:21:34.667360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.276 [2024-12-06 12:21:34.667364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.276 [2024-12-06 12:21:34.667379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.276 [2024-12-06 12:21:34.667396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.276 [2024-12-06 12:21:34.667419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.276 [2024-12-06 12:21:34.667461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.276 [2024-12-06 12:21:34.667468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.276 [2024-12-06 12:21:34.667472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.276 [2024-12-06 12:21:34.667487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.276 [2024-12-06 12:21:34.667504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.276 [2024-12-06 12:21:34.667523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.276 [2024-12-06 12:21:34.667582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.276 [2024-12-06 12:21:34.667600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.276 [2024-12-06 12:21:34.667604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.276 [2024-12-06 12:21:34.667632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.276 [2024-12-06 12:21:34.667648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.276 [2024-12-06 12:21:34.667665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.276 [2024-12-06 12:21:34.667703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.276 [2024-12-06 12:21:34.667709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.276 [2024-12-06 12:21:34.667713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.276 [2024-12-06 12:21:34.667726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.276 [2024-12-06 12:21:34.667742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.276 [2024-12-06 12:21:34.667759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.276 [2024-12-06 12:21:34.667799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.276 [2024-12-06 12:21:34.667805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.276 [2024-12-06 12:21:34.667809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.276 [2024-12-06 12:21:34.667823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667827] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.276 [2024-12-06 12:21:34.667838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.276 [2024-12-06 12:21:34.667855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.276 [2024-12-06 12:21:34.667895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.276 [2024-12-06 12:21:34.667901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.276 [2024-12-06 12:21:34.667905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.276 [2024-12-06 12:21:34.667919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.667927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.276 [2024-12-06 12:21:34.667934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.276 [2024-12-06 12:21:34.667951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.276 [2024-12-06 12:21:34.667994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.276 [2024-12-06 12:21:34.668001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.276 [2024-12-06 12:21:34.668004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.668008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.276 [2024-12-06 12:21:34.668018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.668023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.276 [2024-12-06 12:21:34.668026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.277 [2024-12-06 12:21:34.668033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.277 [2024-12-06 12:21:34.668050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.277 [2024-12-06 12:21:34.668094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.277 [2024-12-06 12:21:34.668100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.277 [2024-12-06 12:21:34.668104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.277 [2024-12-06 12:21:34.668108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.277 [2024-12-06 12:21:34.668118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.277 [2024-12-06 12:21:34.668123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.277 [2024-12-06 12:21:34.668126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.277 
[2024-12-06 12:21:34.668133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.277 [2024-12-06 12:21:34.668150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.277 [2024-12-06 12:21:34.668187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.277 [2024-12-06 12:21:34.668194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.277 [2024-12-06 12:21:34.668197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.277 [2024-12-06 12:21:34.668201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.277 [2024-12-06 12:21:34.668212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.277 [2024-12-06 12:21:34.668216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.277 [2024-12-06 12:21:34.668220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2165750) 00:13:48.277 [2024-12-06 12:21:34.672258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.277 [2024-12-06 12:21:34.672290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c9bc0, cid 3, qid 0 00:13:48.277 [2024-12-06 12:21:34.672333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.277 [2024-12-06 12:21:34.672340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.277 [2024-12-06 12:21:34.672344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.277 [2024-12-06 12:21:34.672348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c9bc0) on tqpair=0x2165750 00:13:48.277 [2024-12-06 12:21:34.672356] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:13:48.277 0% 00:13:48.277 Data Units Read: 0 00:13:48.277 Data Units Written: 0 00:13:48.277 Host Read Commands: 0 00:13:48.277 Host Write Commands: 0 00:13:48.277 Controller Busy Time: 0 minutes 00:13:48.277 Power Cycles: 0 00:13:48.277 Power On Hours: 0 hours 00:13:48.277 Unsafe Shutdowns: 0 00:13:48.277 Unrecoverable Media Errors: 0 00:13:48.277 Lifetime Error Log Entries: 0 00:13:48.277 Warning Temperature Time: 0 minutes 00:13:48.277 Critical Temperature Time: 0 minutes 00:13:48.277 00:13:48.277 Number of Queues 00:13:48.277 ================ 00:13:48.277 Number of I/O Submission Queues: 127 00:13:48.277 Number of I/O Completion Queues: 127 00:13:48.277 00:13:48.277 Active Namespaces 00:13:48.277 ================= 00:13:48.277 Namespace ID:1 00:13:48.277 Error Recovery Timeout: Unlimited 00:13:48.277 Command Set Identifier: NVM (00h) 00:13:48.277 Deallocate: Supported 00:13:48.277 Deallocated/Unwritten Error: Not Supported 00:13:48.277 Deallocated Read Value: Unknown 00:13:48.277 Deallocate in Write Zeroes: Not Supported 00:13:48.277 Deallocated Guard Field: 0xFFFF 00:13:48.277 Flush: Supported 00:13:48.277 Reservation: Supported 00:13:48.277 Namespace Sharing Capabilities: Multiple Controllers 00:13:48.277 Size (in LBAs): 131072 (0GiB) 00:13:48.277 Capacity (in LBAs): 131072 (0GiB) 00:13:48.277 Utilization (in LBAs): 131072 (0GiB) 00:13:48.277 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:48.277 EUI64: ABCDEF0123456789 00:13:48.277 UUID: 079133e1-7fe3-4c61-9484-4630ca5d2d86 
00:13:48.277 Thin Provisioning: Not Supported 00:13:48.277 Per-NS Atomic Units: Yes 00:13:48.277 Atomic Boundary Size (Normal): 0 00:13:48.277 Atomic Boundary Size (PFail): 0 00:13:48.277 Atomic Boundary Offset: 0 00:13:48.277 Maximum Single Source Range Length: 65535 00:13:48.277 Maximum Copy Length: 65535 00:13:48.277 Maximum Source Range Count: 1 00:13:48.277 NGUID/EUI64 Never Reused: No 00:13:48.277 Namespace Write Protected: No 00:13:48.277 Number of LBA Formats: 1 00:13:48.277 Current LBA Format: LBA Format #00 00:13:48.277 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:48.277 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:48.277 rmmod nvme_tcp 00:13:48.277 rmmod nvme_fabrics 00:13:48.277 rmmod nvme_keyring 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73521 ']' 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73521 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 73521 ']' 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 73521 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73521 00:13:48.277 killing process with pid 73521 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73521' 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 
-- # kill 73521 00:13:48.277 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 73521 00:13:48.537 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:48.537 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:48.537 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:48.537 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:13:48.537 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:13:48.537 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:48.537 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:13:48.537 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:48.537 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:48.537 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:48.537 12:21:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.537 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:13:48.796 00:13:48.796 real 0m2.059s 00:13:48.796 user 0m4.131s 00:13:48.796 sys 0m0.674s 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.796 ************************************ 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:48.796 END TEST nvmf_identify 00:13:48.796 ************************************ 
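The identify run above finishes with the usual teardown: host/identify.sh deletes the subsystem over RPC, and nvmftestfini (nvmf/common.sh) unloads the initiator modules, drops the SPDK iptables rules, kills the target app, and dismantles the veth/bridge test network. Condensed into a standalone sketch, the sequence is roughly the following; the rpc.py path, NQN, pid, module and interface names are the ones visible in the trace, while the exact wiring of the iptr and remove_spdk_ns helpers is assumed rather than copied from the scripts:

# Rough reconstruction of the teardown traced above (illustrative sketch,
# not the literal contents of identify.sh / nvmf/common.sh).
NQN=nqn.2016-06.io.spdk:cnode1
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

sync                                    # host/identify.sh@51
"$RPC" nvmf_delete_subsystem "$NQN"     # host/identify.sh@52

# nvmfcleanup: unload the kernel initiator modules pulled in for the test.
modprobe -v -r nvme-tcp                 # also drops nvme_fabrics / nvme_keyring dependencies
modprobe -v -r nvme-fabrics

# killprocess: stop the nvmf target app (73521 in this run; differs per run).
kill 73521 && wait 73521

# iptr: remove the SPDK_NVMF iptables rules (pipeline order assumed).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# nvmf_veth_fini: dismantle the virtual test network built for NET_TYPE=virt.
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster
    ip link set "$ifc" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
# remove_spdk_ns then deletes the nvmf_tgt_ns_spdk namespace itself.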
00:13:48.796 12:21:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:48.796 ************************************ 00:13:48.796 START TEST nvmf_perf 00:13:48.796 ************************************ 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:48.796 * Looking for test storage... 00:13:48.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.796 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.055 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:49.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.056 --rc genhtml_branch_coverage=1 00:13:49.056 --rc genhtml_function_coverage=1 00:13:49.056 --rc genhtml_legend=1 00:13:49.056 --rc geninfo_all_blocks=1 00:13:49.056 --rc geninfo_unexecuted_blocks=1 00:13:49.056 00:13:49.056 ' 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:49.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.056 --rc genhtml_branch_coverage=1 00:13:49.056 --rc genhtml_function_coverage=1 00:13:49.056 --rc genhtml_legend=1 00:13:49.056 --rc geninfo_all_blocks=1 00:13:49.056 --rc geninfo_unexecuted_blocks=1 00:13:49.056 00:13:49.056 ' 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:49.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.056 --rc genhtml_branch_coverage=1 00:13:49.056 --rc genhtml_function_coverage=1 00:13:49.056 --rc genhtml_legend=1 00:13:49.056 --rc geninfo_all_blocks=1 00:13:49.056 --rc geninfo_unexecuted_blocks=1 00:13:49.056 00:13:49.056 ' 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:49.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.056 --rc genhtml_branch_coverage=1 00:13:49.056 --rc genhtml_function_coverage=1 00:13:49.056 --rc genhtml_legend=1 00:13:49.056 --rc geninfo_all_blocks=1 00:13:49.056 --rc geninfo_unexecuted_blocks=1 00:13:49.056 00:13:49.056 ' 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:49.056 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:49.056 Cannot find device "nvmf_init_br" 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:49.056 Cannot find device "nvmf_init_br2" 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:49.056 Cannot find device "nvmf_tgt_br" 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:13:49.056 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:49.057 Cannot find device "nvmf_tgt_br2" 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:49.057 Cannot find device "nvmf_init_br" 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:49.057 Cannot find device "nvmf_init_br2" 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:49.057 Cannot find device "nvmf_tgt_br" 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:49.057 Cannot find device "nvmf_tgt_br2" 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:49.057 Cannot find device "nvmf_br" 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:49.057 Cannot find device "nvmf_init_if" 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:49.057 Cannot find device "nvmf_init_if2" 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:49.057 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:49.316 12:21:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:49.316 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:49.316 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:13:49.316 00:13:49.316 --- 10.0.0.3 ping statistics --- 00:13:49.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.316 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:49.316 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:49.316 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:13:49.316 00:13:49.316 --- 10.0.0.4 ping statistics --- 00:13:49.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.316 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:49.316 00:13:49.316 --- 10.0.0.1 ping statistics --- 00:13:49.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.316 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:49.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:13:49.316 00:13:49.316 --- 10.0.0.2 ping statistics --- 00:13:49.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.316 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=73770 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 73770 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 73770 ']' 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
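The nvmf_veth_init sequence traced above boils down to the following topology; interface names, addresses and the 4420 port are taken straight from the trace, and this is only a condensed sketch of what nvmf/common.sh does, not the script itself:

  # Target side lives in its own network namespace; initiators stay in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  # Two veth pairs for the initiator side, two for the target side.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Initiators get 10.0.0.1/.2, targets get 10.0.0.3/.4 (all /24).
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # Bring every link up, including loopback inside the namespace.
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the four *_br peers together in the root namespace.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" master nvmf_br
  done
  # Open TCP port 4420 on the initiator interfaces and allow forwarding across the bridge.
  # (The real script tags each rule with an SPDK_NVMF comment so teardown can strip them selectively.)
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from inside nvmf_tgt_ns_spdk) confirm the bridge is forwarding before nvmf_tgt is launched inside the namespace with -i 0 -e 0xFFFF -m 0xF.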
00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.316 12:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:49.575 [2024-12-06 12:21:35.989559] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:49.575 [2024-12-06 12:21:35.989816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.575 [2024-12-06 12:21:36.135155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.575 [2024-12-06 12:21:36.175139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.575 [2024-12-06 12:21:36.175251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.575 [2024-12-06 12:21:36.175267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.575 [2024-12-06 12:21:36.175278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.575 [2024-12-06 12:21:36.175286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.575 [2024-12-06 12:21:36.176233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.575 [2024-12-06 12:21:36.177021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.575 [2024-12-06 12:21:36.177252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.575 [2024-12-06 12:21:36.177256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.575 [2024-12-06 12:21:36.212312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:49.834 12:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.834 12:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:13:49.834 12:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.834 12:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.834 12:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:49.834 12:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.834 12:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:49.834 12:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:50.402 12:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:50.402 12:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:50.402 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:13:50.402 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.969 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:50.969 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:13:50.969 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:50.969 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:50.969 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:50.969 [2024-12-06 12:21:37.620447] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.228 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:51.487 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:51.487 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:51.745 12:21:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:51.745 12:21:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:52.003 12:21:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:52.003 [2024-12-06 12:21:38.653775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:52.262 12:21:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:52.521 12:21:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:13:52.521 12:21:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:52.521 12:21:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:52.521 12:21:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:53.457 Initializing NVMe Controllers 00:13:53.457 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:53.457 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:53.457 Initialization complete. Launching workers. 00:13:53.457 ======================================================== 00:13:53.457 Latency(us) 00:13:53.457 Device Information : IOPS MiB/s Average min max 00:13:53.457 PCIE (0000:00:10.0) NSID 1 from core 0: 21802.11 85.16 1466.80 390.81 8176.84 00:13:53.457 ======================================================== 00:13:53.457 Total : 21802.11 85.16 1466.80 390.81 8176.84 00:13:53.457 00:13:53.457 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:54.834 Initializing NVMe Controllers 00:13:54.834 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:54.834 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:54.834 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:54.834 Initialization complete. Launching workers. 
00:13:54.834 ======================================================== 00:13:54.834 Latency(us) 00:13:54.834 Device Information : IOPS MiB/s Average min max 00:13:54.834 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3987.98 15.58 249.41 96.57 5228.24 00:13:54.834 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.44 6991.40 12016.37 00:13:54.834 ======================================================== 00:13:54.834 Total : 4111.98 16.06 487.01 96.57 12016.37 00:13:54.834 00:13:54.834 12:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:56.208 Initializing NVMe Controllers 00:13:56.208 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.208 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:56.208 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:56.208 Initialization complete. Launching workers. 00:13:56.208 ======================================================== 00:13:56.208 Latency(us) 00:13:56.208 Device Information : IOPS MiB/s Average min max 00:13:56.208 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9425.76 36.82 3395.40 472.11 7576.74 00:13:56.208 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3999.57 15.62 8018.07 6061.93 12038.29 00:13:56.208 ======================================================== 00:13:56.208 Total : 13425.33 52.44 4772.55 472.11 12038.29 00:13:56.208 00:13:56.208 12:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:56.208 12:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:58.782 Initializing NVMe Controllers 00:13:58.782 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.782 Controller IO queue size 128, less than required. 00:13:58.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.782 Controller IO queue size 128, less than required. 00:13:58.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.783 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:58.783 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:58.783 Initialization complete. Launching workers. 
00:13:58.783 ======================================================== 00:13:58.783 Latency(us) 00:13:58.783 Device Information : IOPS MiB/s Average min max 00:13:58.783 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2022.96 505.74 64074.44 31885.33 111093.14 00:13:58.783 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 676.99 169.25 199286.12 63900.02 312751.71 00:13:58.783 ======================================================== 00:13:58.783 Total : 2699.95 674.99 97977.52 31885.33 312751.71 00:13:58.783 00:13:58.783 12:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:13:59.041 Initializing NVMe Controllers 00:13:59.041 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:59.041 Controller IO queue size 128, less than required. 00:13:59.041 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:59.041 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:59.041 Controller IO queue size 128, less than required. 00:13:59.041 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:59.041 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:13:59.041 WARNING: Some requested NVMe devices were skipped 00:13:59.041 No valid NVMe controllers or AIO or URING devices found 00:13:59.041 12:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:14:01.572 Initializing NVMe Controllers 00:14:01.572 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.572 Controller IO queue size 128, less than required. 00:14:01.572 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:01.572 Controller IO queue size 128, less than required. 00:14:01.572 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:01.572 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:01.572 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:01.572 Initialization complete. Launching workers. 
00:14:01.573 00:14:01.573 ==================== 00:14:01.573 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:01.573 TCP transport: 00:14:01.573 polls: 12114 00:14:01.573 idle_polls: 6273 00:14:01.573 sock_completions: 5841 00:14:01.573 nvme_completions: 6327 00:14:01.573 submitted_requests: 9536 00:14:01.573 queued_requests: 1 00:14:01.573 00:14:01.573 ==================== 00:14:01.573 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:01.573 TCP transport: 00:14:01.573 polls: 14854 00:14:01.573 idle_polls: 9872 00:14:01.573 sock_completions: 4982 00:14:01.573 nvme_completions: 7149 00:14:01.573 submitted_requests: 10720 00:14:01.573 queued_requests: 1 00:14:01.573 ======================================================== 00:14:01.573 Latency(us) 00:14:01.573 Device Information : IOPS MiB/s Average min max 00:14:01.573 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1578.31 394.58 82856.01 32518.86 313885.71 00:14:01.573 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1783.39 445.85 72544.27 31130.76 120450.22 00:14:01.573 ======================================================== 00:14:01.573 Total : 3361.70 840.42 77385.60 31130.76 313885.71 00:14:01.573 00:14:01.573 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:01.831 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:02.090 rmmod nvme_tcp 00:14:02.090 rmmod nvme_fabrics 00:14:02.090 rmmod nvme_keyring 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 73770 ']' 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 73770 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 73770 ']' 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 73770 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73770 00:14:02.090 killing process with pid 73770 00:14:02.090 12:21:48 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73770' 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 73770 00:14:02.090 12:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 73770 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:14:02.658 00:14:02.658 real 0m13.994s 00:14:02.658 user 0m50.581s 00:14:02.658 sys 0m3.896s 00:14:02.658 
************************************ 00:14:02.658 END TEST nvmf_perf 00:14:02.658 ************************************ 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.658 12:21:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:02.919 ************************************ 00:14:02.919 START TEST nvmf_fio_host 00:14:02.919 ************************************ 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:02.919 * Looking for test storage... 00:14:02.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.919 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:02.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.920 --rc genhtml_branch_coverage=1 00:14:02.920 --rc genhtml_function_coverage=1 00:14:02.920 --rc genhtml_legend=1 00:14:02.920 --rc geninfo_all_blocks=1 00:14:02.920 --rc geninfo_unexecuted_blocks=1 00:14:02.920 00:14:02.920 ' 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:02.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.920 --rc genhtml_branch_coverage=1 00:14:02.920 --rc genhtml_function_coverage=1 00:14:02.920 --rc genhtml_legend=1 00:14:02.920 --rc geninfo_all_blocks=1 00:14:02.920 --rc geninfo_unexecuted_blocks=1 00:14:02.920 00:14:02.920 ' 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:02.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.920 --rc genhtml_branch_coverage=1 00:14:02.920 --rc genhtml_function_coverage=1 00:14:02.920 --rc genhtml_legend=1 00:14:02.920 --rc geninfo_all_blocks=1 00:14:02.920 --rc geninfo_unexecuted_blocks=1 00:14:02.920 00:14:02.920 ' 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:02.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.920 --rc genhtml_branch_coverage=1 00:14:02.920 --rc genhtml_function_coverage=1 00:14:02.920 --rc genhtml_legend=1 00:14:02.920 --rc geninfo_all_blocks=1 00:14:02.920 --rc geninfo_unexecuted_blocks=1 00:14:02.920 00:14:02.920 ' 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.920 12:21:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.920 12:21:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:02.920 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:02.920 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
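Each host test in this suite is wrapped by the same nvmftestinit/nvmftestfini bracket: on entry it removes any namespace left over from the previous test, rebuilds the veth/bridge topology sketched earlier, sets NVMF_TRANSPORT_OPTS to '-t tcp -o', loads nvme-tcp and registers nvmftestfini as the exit trap. The teardown half, condensed from the end of the nvmf_perf trace above, is roughly the following; these are plain-command equivalents of the script's helpers, with the namespace removal normally handled by its _remove_spdk_ns helper:

  sync
  modprobe -v -r nvme-tcp            # trace shows nvme_tcp, nvme_fabrics and nvme_keyring unloading here
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                    # the nvmf_tgt started earlier (pid 73770 in this run)
  # Strip only the SPDK_NVMF-tagged iptables rules added during init.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Detach the four bridge ports, bring them down, then delete bridge, veths and namespace.
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" nomaster
      ip link set "$port" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed plain-command equivalent of _remove_spdk_ns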
00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:02.921 Cannot find device "nvmf_init_br" 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:02.921 Cannot find device "nvmf_init_br2" 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:02.921 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:03.180 Cannot find device "nvmf_tgt_br" 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:03.180 Cannot find device "nvmf_tgt_br2" 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:03.180 Cannot find device "nvmf_init_br" 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:03.180 Cannot find device "nvmf_init_br2" 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:03.180 Cannot find device "nvmf_tgt_br" 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:03.180 Cannot find device "nvmf_tgt_br2" 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:03.180 Cannot find device "nvmf_br" 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:03.180 Cannot find device "nvmf_init_if" 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:03.180 Cannot find device "nvmf_init_if2" 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:03.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:03.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:03.180 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:03.181 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:03.181 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:03.181 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:03.181 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:03.181 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:03.181 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:03.181 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:03.181 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:03.181 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:03.181 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:03.181 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:03.440 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:03.440 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:14:03.440 00:14:03.440 --- 10.0.0.3 ping statistics --- 00:14:03.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.440 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:03.440 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:03.440 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:14:03.440 00:14:03.440 --- 10.0.0.4 ping statistics --- 00:14:03.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.440 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:03.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:03.440 00:14:03.440 --- 10.0.0.1 ping statistics --- 00:14:03.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.440 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:03.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:03.440 00:14:03.440 --- 10.0.0.2 ping statistics --- 00:14:03.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.440 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74216 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74216 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74216 ']' 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.440 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.441 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.441 12:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:03.441 [2024-12-06 12:21:50.018826] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:14:03.441 [2024-12-06 12:21:50.018947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.699 [2024-12-06 12:21:50.172563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.699 [2024-12-06 12:21:50.212625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.699 [2024-12-06 12:21:50.212693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.699 [2024-12-06 12:21:50.212708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.699 [2024-12-06 12:21:50.212719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.699 [2024-12-06 12:21:50.212727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
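Once the target is up, host/fio.sh configures it entirely over JSON-RPC and then drives I/O through fio's SPDK NVMe plugin. A condensed sketch of that sequence, with every command taken from the trace that follows (rpc.py talks to the target's default UNIX socket at /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Transport, backing bdev, subsystem, namespace, data + discovery listeners.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # fio then runs unmodified, with the SPDK NVMe plugin preloaded and the target
    # addressed through the ioengine's filename syntax instead of a block device.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096

The bdev_malloc_create arguments (64, 512) are the size in MiB and the block size in bytes, so the fio runs below exercise a 64 MiB RAM-backed namespace over NVMe/TCP inside the veth topology built above.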
00:14:03.699 [2024-12-06 12:21:50.213697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.699 [2024-12-06 12:21:50.214212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.699 [2024-12-06 12:21:50.214374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.699 [2024-12-06 12:21:50.214475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.699 [2024-12-06 12:21:50.250998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:03.699 12:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.699 12:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:14:03.699 12:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:03.958 [2024-12-06 12:21:50.595876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.217 12:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:04.217 12:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.217 12:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:04.217 12:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:04.476 Malloc1 00:14:04.476 12:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:04.476 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:05.056 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:05.056 [2024-12-06 12:21:51.700763] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:14:05.329 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:05.588 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:05.588 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:05.588 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:05.588 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:05.588 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:05.588 fio-3.35 00:14:05.588 Starting 1 thread 00:14:08.121 00:14:08.121 test: (groupid=0, jobs=1): err= 0: pid=74292: Fri Dec 6 12:21:54 2024 00:14:08.121 read: IOPS=9467, BW=37.0MiB/s (38.8MB/s)(74.2MiB/2007msec) 00:14:08.121 slat (nsec): min=1796, max=1837.2k, avg=2392.22, stdev=13695.59 00:14:08.121 clat (usec): min=2547, max=12510, avg=7037.47, stdev=584.76 00:14:08.121 lat (usec): min=2604, max=12512, avg=7039.86, stdev=584.60 00:14:08.121 clat percentiles (usec): 00:14:08.121 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6652], 00:14:08.121 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:14:08.121 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7963], 00:14:08.121 | 99.00th=[ 8979], 99.50th=[ 9503], 99.90th=[11469], 99.95th=[11600], 00:14:08.121 | 99.99th=[12518] 00:14:08.121 bw ( KiB/s): min=36710, max=38904, per=99.98%, avg=37863.50, stdev=898.28, samples=4 00:14:08.121 iops : min= 9177, max= 9726, avg=9465.75, stdev=224.78, samples=4 00:14:08.121 write: IOPS=9472, BW=37.0MiB/s (38.8MB/s)(74.3MiB/2007msec); 0 zone resets 00:14:08.121 slat (nsec): min=1899, max=246454, avg=2400.20, stdev=2316.70 00:14:08.121 clat (usec): min=2406, max=12163, avg=6416.11, stdev=532.37 00:14:08.121 lat (usec): min=2421, max=12165, avg=6418.51, stdev=532.30 00:14:08.121 
clat percentiles (usec): 00:14:08.121 | 1.00th=[ 5407], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6063], 00:14:08.121 | 30.00th=[ 6194], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6456], 00:14:08.121 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6980], 95.00th=[ 7242], 00:14:08.121 | 99.00th=[ 8029], 99.50th=[ 8717], 99.90th=[10683], 99.95th=[11600], 00:14:08.121 | 99.99th=[12125] 00:14:08.121 bw ( KiB/s): min=37564, max=38016, per=99.95%, avg=37871.00, stdev=213.38, samples=4 00:14:08.121 iops : min= 9391, max= 9504, avg=9467.75, stdev=53.34, samples=4 00:14:08.121 lat (msec) : 4=0.08%, 10=99.69%, 20=0.23% 00:14:08.121 cpu : usr=70.34%, sys=22.48%, ctx=4, majf=0, minf=7 00:14:08.121 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:08.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:08.121 issued rwts: total=19001,19012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.121 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:08.121 00:14:08.121 Run status group 0 (all jobs): 00:14:08.121 READ: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=74.2MiB (77.8MB), run=2007-2007msec 00:14:08.121 WRITE: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=74.3MiB (77.9MB), run=2007-2007msec 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:08.121 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:08.121 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:08.121 fio-3.35 00:14:08.121 Starting 1 thread 00:14:10.655 00:14:10.655 test: (groupid=0, jobs=1): err= 0: pid=74335: Fri Dec 6 12:21:56 2024 00:14:10.655 read: IOPS=8828, BW=138MiB/s (145MB/s)(277MiB/2006msec) 00:14:10.655 slat (usec): min=2, max=126, avg= 3.58, stdev= 2.45 00:14:10.655 clat (usec): min=1782, max=19029, avg=8052.25, stdev=2499.43 00:14:10.655 lat (usec): min=1785, max=19032, avg=8055.83, stdev=2499.53 00:14:10.655 clat percentiles (usec): 00:14:10.655 | 1.00th=[ 3851], 5.00th=[ 4555], 10.00th=[ 5014], 20.00th=[ 5800], 00:14:10.655 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7767], 60.00th=[ 8455], 00:14:10.655 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[11338], 95.00th=[12911], 00:14:10.655 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16909], 99.95th=[17957], 00:14:10.655 | 99.99th=[19006] 00:14:10.655 bw ( KiB/s): min=64480, max=78912, per=50.89%, avg=71880.00, stdev=5987.59, samples=4 00:14:10.655 iops : min= 4030, max= 4932, avg=4492.50, stdev=374.22, samples=4 00:14:10.655 write: IOPS=5174, BW=80.9MiB/s (84.8MB/s)(146MiB/1807msec); 0 zone resets 00:14:10.655 slat (usec): min=31, max=683, avg=36.67, stdev=12.10 00:14:10.655 clat (usec): min=3828, max=18452, avg=11448.85, stdev=2185.96 00:14:10.655 lat (usec): min=3860, max=18487, avg=11485.53, stdev=2187.92 00:14:10.655 clat percentiles (usec): 00:14:10.655 | 1.00th=[ 7570], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9634], 00:14:10.655 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:14:10.655 | 70.00th=[12387], 80.00th=[13304], 90.00th=[14615], 95.00th=[15533], 00:14:10.655 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:14:10.655 | 99.99th=[18482] 00:14:10.655 bw ( KiB/s): min=66912, max=82272, per=90.35%, avg=74808.00, stdev=6329.39, samples=4 00:14:10.655 iops : min= 4182, max= 5142, avg=4675.50, stdev=395.59, samples=4 00:14:10.655 lat (msec) : 2=0.02%, 4=1.02%, 10=61.18%, 20=37.78% 00:14:10.655 cpu : usr=82.24%, sys=13.12%, ctx=37, majf=0, minf=14 00:14:10.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:10.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:10.656 issued rwts: total=17709,9351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:10.656 00:14:10.656 Run status group 0 (all jobs): 00:14:10.656 
READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=277MiB (290MB), run=2006-2006msec 00:14:10.656 WRITE: bw=80.9MiB/s (84.8MB/s), 80.9MiB/s-80.9MiB/s (84.8MB/s-84.8MB/s), io=146MiB (153MB), run=1807-1807msec 00:14:10.656 12:21:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:10.656 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:10.656 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:10.656 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:10.656 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:10.656 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:10.656 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:14:10.656 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:10.656 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:14:10.656 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:10.656 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:10.656 rmmod nvme_tcp 00:14:10.916 rmmod nvme_fabrics 00:14:10.916 rmmod nvme_keyring 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74216 ']' 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74216 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74216 ']' 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74216 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74216 00:14:10.916 killing process with pid 74216 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74216' 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74216 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74216 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:14:10.916 12:21:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:10.916 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:10.917 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:14:11.177 00:14:11.177 real 0m8.442s 00:14:11.177 user 0m33.644s 00:14:11.177 sys 0m2.301s 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:11.177 ************************************ 00:14:11.177 END TEST nvmf_fio_host 00:14:11.177 ************************************ 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:11.177 ************************************ 00:14:11.177 START TEST nvmf_failover 
00:14:11.177 ************************************ 00:14:11.177 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:11.437 * Looking for test storage... 00:14:11.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.437 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.438 --rc genhtml_branch_coverage=1 00:14:11.438 --rc genhtml_function_coverage=1 00:14:11.438 --rc genhtml_legend=1 00:14:11.438 --rc geninfo_all_blocks=1 00:14:11.438 --rc geninfo_unexecuted_blocks=1 00:14:11.438 00:14:11.438 ' 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.438 --rc genhtml_branch_coverage=1 00:14:11.438 --rc genhtml_function_coverage=1 00:14:11.438 --rc genhtml_legend=1 00:14:11.438 --rc geninfo_all_blocks=1 00:14:11.438 --rc geninfo_unexecuted_blocks=1 00:14:11.438 00:14:11.438 ' 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.438 --rc genhtml_branch_coverage=1 00:14:11.438 --rc genhtml_function_coverage=1 00:14:11.438 --rc genhtml_legend=1 00:14:11.438 --rc geninfo_all_blocks=1 00:14:11.438 --rc geninfo_unexecuted_blocks=1 00:14:11.438 00:14:11.438 ' 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.438 --rc genhtml_branch_coverage=1 00:14:11.438 --rc genhtml_function_coverage=1 00:14:11.438 --rc genhtml_legend=1 00:14:11.438 --rc geninfo_all_blocks=1 00:14:11.438 --rc geninfo_unexecuted_blocks=1 00:14:11.438 00:14:11.438 ' 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.438 
12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:11.438 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
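The failover test then rebuilds the same veth topology and starts its own target with nvmfappstart -m 0xE. Stripped of the helpers, the launch pattern visible in the trace below is roughly the following (a sketch; the actual helpers in nvmf/common.sh and autotest_common.sh also manage traps and shared-memory IDs):

    # Run nvmf_tgt inside the target namespace on cores 1-3 (-m 0xE), with all
    # tracepoint groups enabled (-e 0xFFFF), then wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # polls until /var/tmp/spdk.sock accepts RPCs

The rest of the failover flow (beyond the end of this excerpt) reuses the same RPC-driven pattern shown for the fio test above, with bdevperf as the initiator-side workload on the separate /var/tmp/bdevperf.sock socket defined here.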
00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.438 12:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:11.438 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:11.438 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:11.438 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:11.438 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:11.438 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:11.438 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.438 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:11.438 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:11.438 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:11.438 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:11.439 Cannot find device "nvmf_init_br" 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:11.439 Cannot find device "nvmf_init_br2" 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:14:11.439 Cannot find device "nvmf_tgt_br" 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.439 Cannot find device "nvmf_tgt_br2" 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:11.439 Cannot find device "nvmf_init_br" 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:11.439 Cannot find device "nvmf_init_br2" 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:11.439 Cannot find device "nvmf_tgt_br" 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:11.439 Cannot find device "nvmf_tgt_br2" 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:14:11.439 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:11.698 Cannot find device "nvmf_br" 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:11.698 Cannot find device "nvmf_init_if" 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:11.698 Cannot find device "nvmf_init_if2" 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:11.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:11.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:11.698 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:11.699 
12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:11.699 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:11.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:11.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:11.958 00:14:11.958 --- 10.0.0.3 ping statistics --- 00:14:11.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.958 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:11.958 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:11.958 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:14:11.958 00:14:11.958 --- 10.0.0.4 ping statistics --- 00:14:11.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.958 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:11.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:11.958 00:14:11.958 --- 10.0.0.1 ping statistics --- 00:14:11.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.958 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:11.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:14:11.958 00:14:11.958 --- 10.0.0.2 ping statistics --- 00:14:11.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.958 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74605 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74605 00:14:11.958 12:21:58 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74605 ']' 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.958 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:11.958 [2024-12-06 12:21:58.485006] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:14:11.958 [2024-12-06 12:21:58.485068] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.218 [2024-12-06 12:21:58.634674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:12.218 [2024-12-06 12:21:58.673219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.218 [2024-12-06 12:21:58.673276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.218 [2024-12-06 12:21:58.673289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.218 [2024-12-06 12:21:58.673300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.218 [2024-12-06 12:21:58.673309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
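For reference, the nvmf_veth_init sequence traced above reduces to the following standalone sketch (interface names, addresses and iptables rules taken directly from the trace; run as root, assumes iproute2 and iptables are present):

# Recreate the test topology: two initiator veth pairs stay on the host, two
# target veth pairs have their *_if ends moved into a network namespace, and
# all four host-side peer ends are tied together with one bridge.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live in the namespace; the *_br ends remain on the host.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge connects initiator and target legs.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Accept NVMe/TCP traffic on the initiator interfaces and bridge-local forwarding,
# exactly as the ipts wrapper does above.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, mirroring the pings in the trace.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1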
00:14:12.218 [2024-12-06 12:21:58.674133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.218 [2024-12-06 12:21:58.674252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.218 [2024-12-06 12:21:58.674259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.218 [2024-12-06 12:21:58.708066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:13.156 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.156 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:13.156 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:13.156 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:13.156 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:13.156 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.156 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:13.156 [2024-12-06 12:21:59.731970] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.156 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:13.415 Malloc0 00:14:13.415 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:13.675 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.934 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:14.194 [2024-12-06 12:22:00.713522] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:14.194 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:14.454 [2024-12-06 12:22:01.005732] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:14.454 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:14.712 [2024-12-06 12:22:01.233847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:14:14.712 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74658 00:14:14.712 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:14.712 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
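Stripped of the xtrace prefixes, the target-side configuration that failover.sh drives is the short RPC sequence below (paths and arguments as in the trace; rpc.py talks over the default UNIX socket to the nvmf_tgt started inside the namespace above, and bdevperf waits for its own RPC socket before the run starts):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with the options used by the test (-u 8192 sets in-capsule data size;
# -o is passed through exactly as the trace shows).
$RPC nvmf_create_transport -t tcp -o -u 8192

# Malloc bdev backing the namespace: 64 MiB of 512-byte blocks.
$RPC bdev_malloc_create 64 512 -b Malloc0

# One subsystem (allow any host, fixed serial) with three TCP listeners for failover.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done

# bdevperf in wait-for-RPC mode (-z), 128-deep 4 KiB verify I/O for 15 s, flags as traced.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &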
00:14:14.712 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74658 /var/tmp/bdevperf.sock 00:14:14.712 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74658 ']' 00:14:14.712 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.712 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.712 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.712 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.712 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:14.971 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.971 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:14.971 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:15.229 NVMe0n1 00:14:15.488 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:15.746 00:14:15.746 12:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=74674 00:14:15.746 12:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:15.746 12:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.683 12:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:16.941 12:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:20.223 12:22:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:20.223 00:14:20.223 12:22:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:20.480 [2024-12-06 12:22:07.078864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60d930 is same with the state(6) to be set 00:14:20.480 [2024-12-06 12:22:07.078927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60d930 is same with the state(6) to be set 00:14:20.480 [2024-12-06 12:22:07.078954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60d930 is same with the state(6) to be set 00:14:20.480 12:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:14:23.759 12:22:10 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:23.759 [2024-12-06 12:22:10.369737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:23.759 12:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:14:25.135 12:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:25.135 [2024-12-06 12:22:11.652253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b640 is same with the state(6) to be set 00:14:25.135 [2024-12-06 12:22:11.652312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b640 is same with the state(6) to be set 00:14:25.135 [2024-12-06 12:22:11.652339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b640 is same with the state(6) to be set 00:14:25.135 [2024-12-06 12:22:11.652347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60b640 is same with the state(6) to be set 00:14:25.135 12:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 74674 00:14:31.711 { 00:14:31.711 "results": [ 00:14:31.711 { 00:14:31.711 "job": "NVMe0n1", 00:14:31.711 "core_mask": "0x1", 00:14:31.711 "workload": "verify", 00:14:31.711 "status": "finished", 00:14:31.711 "verify_range": { 00:14:31.711 "start": 0, 00:14:31.711 "length": 16384 00:14:31.711 }, 00:14:31.711 "queue_depth": 128, 00:14:31.711 "io_size": 4096, 00:14:31.711 "runtime": 15.008139, 00:14:31.711 "iops": 10084.661396059832, 00:14:31.711 "mibps": 39.39320857835872, 00:14:31.711 "io_failed": 3437, 00:14:31.711 "io_timeout": 0, 00:14:31.711 "avg_latency_us": 12381.895444907703, 00:14:31.711 "min_latency_us": 569.7163636363637, 00:14:31.711 "max_latency_us": 16443.578181818182 00:14:31.711 } 00:14:31.711 ], 00:14:31.711 "core_count": 1 00:14:31.711 } 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 74658 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74658 ']' 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74658 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74658 00:14:31.711 killing process with pid 74658 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74658' 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74658 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74658 00:14:31.711 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 
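For orientation before the try.txt dump below: the failover exercise registers a single NVMe bdev with several TCP paths (-x failover) and then rotates listeners out from under it while the verify run is in flight. A condensed sketch of the sequence traced above (same rpc.py/bdevperf.py paths as in the trace):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_RPC="$RPC -s /var/tmp/bdevperf.sock"
NQN=nqn.2016-06.io.spdk:cnode1

# One bdev, multiple paths: with -x failover the second attach becomes an alternate path.
$BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $NQN -x failover
$BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n $NQN -x failover

# Kick off the 15 s verify run in the background...
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# ...and rotate listeners while it runs, forcing the host onto a surviving path each time.
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4420; sleep 3
$BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n $NQN -x failover
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4421; sleep 3
$RPC nvmf_subsystem_add_listener    $NQN -t tcp -a 10.0.0.3 -s 4420; sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4422
wait   # bdevperf prints the results JSON shown above when the run completes

Reading the results JSON together with the try.txt excerpt that follows: the io_failed count lines up with the ABORTED - SQ DELETION completions logged by nvme_qpair, i.e. commands that were in flight on a path whose listener was just removed, while the run still completes at roughly 10k IOPS on the remaining path.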
00:14:31.711 [2024-12-06 12:22:01.300091] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:14:31.711 [2024-12-06 12:22:01.300198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74658 ] 00:14:31.711 [2024-12-06 12:22:01.439104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.711 [2024-12-06 12:22:01.468627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.711 [2024-12-06 12:22:01.496464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:31.711 Running I/O for 15 seconds... 00:14:31.711 7829.00 IOPS, 30.58 MiB/s [2024-12-06T12:22:18.369Z] [2024-12-06 12:22:03.440710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.440759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.440803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.440818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.440833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.440846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.440859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.440872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.440886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.440898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.440912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.440924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.440938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.440950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.440964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.440977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.440990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.441002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.441016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.441028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.441042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.441079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.441095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.441107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.441121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.441134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.441148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.711 [2024-12-06 12:22:03.441160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.711 [2024-12-06 12:22:03.441174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441850] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.441981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.441994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.442007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.442020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.442034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.442046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.442060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.442072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.442086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.442098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.442112] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.442130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.442146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.712 [2024-12-06 12:22:03.442159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.712 [2024-12-06 12:22:03.442184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.713 [2024-12-06 12:22:03.442199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.713 [2024-12-06 12:22:03.442225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.713 [2024-12-06 12:22:03.442251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.713 [2024-12-06 12:22:03.442277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.713 [2024-12-06 12:22:03.442304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70848 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:14:31.713 [2024-12-06 12:22:03.442702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.713 [2024-12-06 12:22:03.442755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.713 [2024-12-06 12:22:03.442782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.442953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 
12:22:03.442980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.442994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.713 [2024-12-06 12:22:03.443007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.443035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.443047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.443063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.443076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.443090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.443102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.443116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.443128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.443142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.443154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.443168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.443196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.443232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.443265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.443280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.443293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.443308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.443321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.713 [2024-12-06 12:22:03.443336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.713 [2024-12-06 12:22:03.443348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.443971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.443985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 
[2024-12-06 12:22:03.444210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.714 [2024-12-06 12:22:03.444405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.714 [2024-12-06 12:22:03.444420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:03.444432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:03.444447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc989c0 is same with the state(6) to be set 00:14:31.715 [2024-12-06 12:22:03.444462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.715 [2024-12-06 12:22:03.444472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.715 [2024-12-06 12:22:03.444481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71376 len:8 PRP1 0x0 PRP2 0x0 00:14:31.715 [2024-12-06 12:22:03.444495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:31.715 [2024-12-06 12:22:03.444550] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:14:31.715 [2024-12-06 12:22:03.444605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.715 [2024-12-06 12:22:03.444625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:03.444640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.715 [2024-12-06 12:22:03.444652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:03.444666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.715 [2024-12-06 12:22:03.444678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:03.444691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.715 [2024-12-06 12:22:03.444702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:03.444715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:14:31.715 [2024-12-06 12:22:03.444752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc27c60 (9): Bad file descriptor 00:14:31.715 [2024-12-06 12:22:03.448393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:31.715 [2024-12-06 12:22:03.474941] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:14:31.715 8722.00 IOPS, 34.07 MiB/s [2024-12-06T12:22:18.373Z] 9313.33 IOPS, 36.38 MiB/s [2024-12-06T12:22:18.373Z] 9613.00 IOPS, 37.55 MiB/s [2024-12-06T12:22:18.373Z] [2024-12-06 12:22:07.079541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:07.079607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.079695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:07.079712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.079727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:07.079740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.079754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:07.079767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.079781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.079794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.079808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.079836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.079849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.079862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.079875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.079887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.079901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.079914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.079928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.079940] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.079954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.079966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.079980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.079992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.080018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.080051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.080078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.080105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.080130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.080158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.080185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.715 [2024-12-06 12:22:07.080211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:07.080236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:07.080281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:07.080308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:07.080333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:07.080360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:07.080385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.715 [2024-12-06 12:22:07.080420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.715 [2024-12-06 12:22:07.080434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.080446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.080472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.080498] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.080524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.080551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.080578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.080605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.080633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.080659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.080978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.080992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.081004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.081030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:14:31.716 [2024-12-06 12:22:07.081045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.081057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.081089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.081115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.081141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.081177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.081206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.081232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.081258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.081285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.716 [2024-12-06 12:22:07.081311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 
12:22:07.081325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.081337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.081363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.716 [2024-12-06 12:22:07.081388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.716 [2024-12-06 12:22:07.081402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.081414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.081447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.081474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.081501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.081527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081861] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.081953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:118480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.081979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.081993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:118488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 
nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:118544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.717 [2024-12-06 12:22:07.082419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:118096 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.082445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.082478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.717 [2024-12-06 12:22:07.082493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.717 [2024-12-06 12:22:07.082506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.718 [2024-12-06 12:22:07.082533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.718 [2024-12-06 12:22:07.082560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.718 [2024-12-06 12:22:07.082601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.718 [2024-12-06 12:22:07.082627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd6700 is same with the state(6) to be set 00:14:31.718 [2024-12-06 12:22:07.082656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.718 [2024-12-06 12:22:07.082665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.718 [2024-12-06 12:22:07.082675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118152 len:8 PRP1 0x0 PRP2 0x0 00:14:31.718 [2024-12-06 12:22:07.082687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.718 [2024-12-06 12:22:07.082709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.718 [2024-12-06 12:22:07.082718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118608 len:8 PRP1 0x0 PRP2 0x0 00:14:31.718 [2024-12-06 12:22:07.082730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.718 [2024-12-06 12:22:07.082751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.718 [2024-12-06 12:22:07.082760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118616 len:8 PRP1 0x0 PRP2 0x0 00:14:31.718 [2024-12-06 12:22:07.082773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.718 [2024-12-06 12:22:07.082794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.718 [2024-12-06 12:22:07.082803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118624 len:8 PRP1 0x0 PRP2 0x0 00:14:31.718 [2024-12-06 12:22:07.082815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.718 [2024-12-06 12:22:07.082844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.718 [2024-12-06 12:22:07.082853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118632 len:8 PRP1 0x0 PRP2 0x0 00:14:31.718 [2024-12-06 12:22:07.082865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.718 [2024-12-06 12:22:07.082904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.718 [2024-12-06 12:22:07.082913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118640 len:8 PRP1 0x0 PRP2 0x0 00:14:31.718 [2024-12-06 12:22:07.082925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.718 [2024-12-06 12:22:07.082947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.718 [2024-12-06 12:22:07.082956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118648 len:8 PRP1 0x0 PRP2 0x0 00:14:31.718 [2024-12-06 12:22:07.082968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.718 [2024-12-06 12:22:07.082981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.718 [2024-12-06 12:22:07.082990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.718 [2024-12-06 12:22:07.082999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118656 len:8 PRP1 0x0 PRP2 0x0 00:14:31.718 [2024-12-06 12:22:07.083012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:31.718 [2024-12-06 12:22:07.083025 through 12:22:07.083707] nvme_qpair.c: repeated for each queued WRITE (sqid:1 cid:0 nsid:1, lba:118664 through lba:118760 in steps of 8, len:8, PRP1 0x0 PRP2 0x0): *ERROR*: aborting queued i/o, *NOTICE*: Command completed manually, *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:31.719 [2024-12-06 12:22:07.083753] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:14:31.719 [2024-12-06 12:22:07.083806 through 12:22:07.083905] nvme_qpair.c: four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:3..0) aborted with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:31.719 [2024-12-06 12:22:07.083917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:14:31.719 [2024-12-06 12:22:07.087471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:14:31.719 [2024-12-06 12:22:07.087510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc27c60 (9): Bad file descriptor
00:14:31.719 [2024-12-06 12:22:07.109487] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:14:31.719 9654.40 IOPS, 37.71 MiB/s [2024-12-06T12:22:18.377Z] 9779.67 IOPS, 38.20 MiB/s [2024-12-06T12:22:18.377Z] 9876.57 IOPS, 38.58 MiB/s [2024-12-06T12:22:18.377Z] 9936.25 IOPS, 38.81 MiB/s [2024-12-06T12:22:18.377Z] 9979.33 IOPS, 38.98 MiB/s [2024-12-06T12:22:18.377Z]
00:14:31.719 [2024-12-06 12:22:11.653058 through 12:22:11.656471] nvme_qpair.c: repeated for each outstanding I/O on sqid:1 (READ lba:109048 through lba:109456 and WRITE lba:109472 through lba:109928, len:8, SGL): *NOTICE*: command printed, *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:31.722 [2024-12-06 12:22:11.656485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd63c0 is same with the state(6) to be set
00:14:31.722 [2024-12-06 12:22:11.656500 through 12:22:11.657311] nvme_qpair.c: repeated for the remaining queued commands (READ lba:109464 and WRITE lba:109936 through lba:110064, len:8, PRP1 0x0 PRP2 0x0): *ERROR*: aborting queued i/o, *NOTICE*: Command completed manually, *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:31.722 [2024-12-06 12:22:11.657357] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:14:31.722 [2024-12-06 12:22:11.657410 through 12:22:11.657518] nvme_qpair.c: four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0..3) aborted with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:31.722 [2024-12-06 12:22:11.657531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:14:31.722 [2024-12-06 12:22:11.661021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:14:31.723 [2024-12-06 12:22:11.661057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc27c60 (9): Bad file descriptor
00:14:31.723 [2024-12-06 12:22:11.691320] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
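Each burst of ABORTED - SQ DELETION completions above is the expected side effect of the test removing the TCP path that is currently carrying I/O: bdev_nvme aborts everything still queued on the deleted submission queue and then fails over to the next registered path. A minimal sketch of the setup and trigger, reconstructed from the rpc.py invocations that appear further down in this trace; the addresses, ports and NQN are the ones shown there, and the sequence is illustrative rather than the script's verbatim logic:

  # register one controller with three TCP paths; -x failover keeps the extra paths as alternate trids
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # removing the active path forces the failover (and the SQ DELETION aborts) logged above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1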
00:14:31.723 9959.20 IOPS, 38.90 MiB/s [2024-12-06T12:22:18.381Z] 9992.73 IOPS, 39.03 MiB/s [2024-12-06T12:22:18.381Z] 10020.00 IOPS, 39.14 MiB/s [2024-12-06T12:22:18.381Z] 10043.08 IOPS, 39.23 MiB/s [2024-12-06T12:22:18.381Z] 10066.29 IOPS, 39.32 MiB/s [2024-12-06T12:22:18.381Z] 10082.13 IOPS, 39.38 MiB/s
00:14:31.723 Latency(us)
00:14:31.723 [2024-12-06T12:22:18.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:31.723 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:31.723 Verification LBA range: start 0x0 length 0x4000
00:14:31.723 NVMe0n1 : 15.01 10084.66 39.39 229.01 0.00 12381.90 569.72 16443.58
00:14:31.723 [2024-12-06T12:22:18.381Z] ===================================================================================================================
00:14:31.723 [2024-12-06T12:22:18.381Z] Total : 10084.66 39.39 229.01 0.00 12381.90 569.72 16443.58
00:14:31.723 Received shutdown signal, test time was about 15.000000 seconds
00:14:31.723
00:14:31.723 Latency(us)
00:14:31.723 [2024-12-06T12:22:18.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:31.723 [2024-12-06T12:22:18.381Z] ===================================================================================================================
00:14:31.723 [2024-12-06T12:22:18.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:31.723 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:14:31.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=74849
12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 74849 /var/tmp/bdevperf.sock
12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74849 ']'
12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
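The host/failover.sh@65 and @67 lines above are the pass/fail gate for the run that just finished: the script counts 'Resetting controller successful' messages in the bdevperf output and requires exactly three, one for each time a path was pulled out from under the active controller. A minimal shell sketch of that check, assuming the bdevperf output has been captured in a file; $bdevperf_output is an illustrative name, not taken from the script:

  # count successful controller resets reported by bdevperf; the run fails unless exactly 3 are seen
  count=$(grep -c 'Resetting controller successful' "$bdevperf_output")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi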
00:14:31.723 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.723 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:31.723 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.723 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:31.723 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:31.723 [2024-12-06 12:22:18.072037] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:31.723 12:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:31.723 [2024-12-06 12:22:18.296172] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:14:31.723 12:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:31.982 NVMe0n1 00:14:31.982 12:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:32.241 00:14:32.500 12:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:32.759 00:14:32.759 12:22:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:32.759 12:22:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:14:33.018 12:22:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:33.277 12:22:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:14:36.568 12:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:36.568 12:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:14:36.568 12:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=74924 00:14:36.568 12:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:36.568 12:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 74924 00:14:37.945 { 00:14:37.945 "results": [ 00:14:37.945 { 00:14:37.945 "job": "NVMe0n1", 00:14:37.945 "core_mask": "0x1", 00:14:37.945 "workload": "verify", 00:14:37.945 "status": "finished", 00:14:37.945 "verify_range": { 00:14:37.945 "start": 0, 00:14:37.945 "length": 16384 00:14:37.945 }, 00:14:37.945 "queue_depth": 128, 
00:14:37.945 "io_size": 4096, 00:14:37.945 "runtime": 1.014859, 00:14:37.945 "iops": 7861.190569330321, 00:14:37.945 "mibps": 30.707775661446565, 00:14:37.945 "io_failed": 0, 00:14:37.945 "io_timeout": 0, 00:14:37.945 "avg_latency_us": 16219.603648214406, 00:14:37.945 "min_latency_us": 1980.9745454545455, 00:14:37.946 "max_latency_us": 14834.967272727272 00:14:37.946 } 00:14:37.946 ], 00:14:37.946 "core_count": 1 00:14:37.946 } 00:14:37.946 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:37.946 [2024-12-06 12:22:17.549839] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:14:37.946 [2024-12-06 12:22:17.549946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74849 ] 00:14:37.946 [2024-12-06 12:22:17.690651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.946 [2024-12-06 12:22:17.720833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.946 [2024-12-06 12:22:17.748596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:37.946 [2024-12-06 12:22:19.707810] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:14:37.946 [2024-12-06 12:22:19.707920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.946 [2024-12-06 12:22:19.707945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.946 [2024-12-06 12:22:19.707962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.946 [2024-12-06 12:22:19.707975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.946 [2024-12-06 12:22:19.707989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.946 [2024-12-06 12:22:19.708001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.946 [2024-12-06 12:22:19.708014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.946 [2024-12-06 12:22:19.708026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.946 [2024-12-06 12:22:19.708040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:14:37.946 [2024-12-06 12:22:19.708085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:14:37.946 [2024-12-06 12:22:19.708127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1cc60 (9): Bad file descriptor 00:14:37.946 [2024-12-06 12:22:19.717209] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
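
The per-run JSON above, and the one-second summary that follows, come from driving an idle bdevperf instance over its RPC socket rather than letting it free-run. A condensed sketch of that control flow, assembled from the commands visible in this log (flags exactly as the test passes them; waitforlisten and killprocess are autotest helpers not reproduced here):

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock

  # Start bdevperf idle; with -z it waits for a perform_tests RPC before running any I/O.
  "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # ... attach the NVMe-oF paths through rpc.py -s $SOCK as sketched earlier ...

  # Trigger the run and wait for it; results are reported as the JSON block shown above.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
  run_test_pid=$!
  wait "$run_test_pid"
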
00:14:37.946 Running I/O for 1 seconds... 00:14:37.946 7840.00 IOPS, 30.62 MiB/s 00:14:37.946 Latency(us) 00:14:37.946 [2024-12-06T12:22:24.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.946 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:37.946 Verification LBA range: start 0x0 length 0x4000 00:14:37.946 NVMe0n1 : 1.01 7861.19 30.71 0.00 0.00 16219.60 1980.97 14834.97 00:14:37.946 [2024-12-06T12:22:24.604Z] =================================================================================================================== 00:14:37.946 [2024-12-06T12:22:24.604Z] Total : 7861.19 30.71 0.00 0.00 16219.60 1980.97 14834.97 00:14:37.946 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:37.946 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:37.946 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:38.205 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:38.205 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:38.464 12:22:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:38.722 12:22:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 74849 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74849 ']' 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74849 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74849 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.006 killing process with pid 74849 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74849' 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74849 00:14:42.006 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74849 00:14:42.265 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:14:42.265 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.525 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:14:42.525 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:42.525 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:14:42.525 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:42.525 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:14:42.525 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:42.525 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:14:42.525 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:42.525 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:42.525 rmmod nvme_tcp 00:14:42.525 rmmod nvme_fabrics 00:14:42.525 rmmod nvme_keyring 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74605 ']' 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74605 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74605 ']' 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74605 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74605 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:42.525 killing process with pid 74605 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74605' 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74605 00:14:42.525 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74605 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.784 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:14:42.784 00:14:42.784 real 0m31.629s 00:14:42.784 user 2m1.730s 00:14:42.784 sys 0m5.214s 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:43.045 ************************************ 00:14:43.045 END TEST nvmf_failover 00:14:43.045 ************************************ 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:43.045 ************************************ 00:14:43.045 START TEST nvmf_host_discovery 00:14:43.045 ************************************ 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:43.045 * Looking for test storage... 
00:14:43.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:43.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.045 --rc genhtml_branch_coverage=1 00:14:43.045 --rc genhtml_function_coverage=1 00:14:43.045 --rc genhtml_legend=1 00:14:43.045 --rc geninfo_all_blocks=1 00:14:43.045 --rc geninfo_unexecuted_blocks=1 00:14:43.045 00:14:43.045 ' 00:14:43.045 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:43.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.045 --rc genhtml_branch_coverage=1 00:14:43.045 --rc genhtml_function_coverage=1 00:14:43.045 --rc genhtml_legend=1 00:14:43.046 --rc geninfo_all_blocks=1 00:14:43.046 --rc geninfo_unexecuted_blocks=1 00:14:43.046 00:14:43.046 ' 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:43.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.046 --rc genhtml_branch_coverage=1 00:14:43.046 --rc genhtml_function_coverage=1 00:14:43.046 --rc genhtml_legend=1 00:14:43.046 --rc geninfo_all_blocks=1 00:14:43.046 --rc geninfo_unexecuted_blocks=1 00:14:43.046 00:14:43.046 ' 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:43.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.046 --rc genhtml_branch_coverage=1 00:14:43.046 --rc genhtml_function_coverage=1 00:14:43.046 --rc genhtml_legend=1 00:14:43.046 --rc geninfo_all_blocks=1 00:14:43.046 --rc geninfo_unexecuted_blocks=1 00:14:43.046 00:14:43.046 ' 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.046 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.046 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:43.325 Cannot find device "nvmf_init_br" 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:43.325 Cannot find device "nvmf_init_br2" 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:43.325 Cannot find device "nvmf_tgt_br" 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:43.325 Cannot find device "nvmf_tgt_br2" 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:43.325 Cannot find device "nvmf_init_br" 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:43.325 Cannot find device "nvmf_init_br2" 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:43.325 Cannot find device "nvmf_tgt_br" 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:43.325 Cannot find device "nvmf_tgt_br2" 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:43.325 Cannot find device "nvmf_br" 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:43.325 Cannot find device "nvmf_init_if" 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:43.325 Cannot find device "nvmf_init_if2" 00:14:43.325 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:43.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:43.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:43.326 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:43.597 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:43.597 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:43.597 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:43.597 12:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:43.597 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:43.597 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:43.597 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:43.597 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:43.597 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:43.597 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:43.597 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:43.597 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:43.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:43.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:43.597 00:14:43.597 --- 10.0.0.3 ping statistics --- 00:14:43.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.597 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:43.597 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:43.597 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:43.597 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:14:43.597 00:14:43.597 --- 10.0.0.4 ping statistics --- 00:14:43.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.597 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:43.597 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:43.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:43.597 00:14:43.597 --- 10.0.0.1 ping statistics --- 00:14:43.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.598 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:43.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:43.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:14:43.598 00:14:43.598 --- 10.0.0.2 ping statistics --- 00:14:43.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.598 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75242 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75242 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75242 ']' 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.598 12:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.598 [2024-12-06 12:22:30.143975] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
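
For orientation, the plumbing assembled above leaves the initiator interfaces nvmf_init_if/nvmf_init_if2 (10.0.0.1, 10.0.0.2) in the root namespace and the target interfaces nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) inside nvmf_tgt_ns_spdk, all joined through the nvmf_br bridge, with iptables ACCEPT rules for TCP/4420 and ping checks in both directions. A condensed single-pair sketch of the same topology (the script also wires the second pair and first tears down any stale devices):

  ip netns add nvmf_tgt_ns_spdk

  # One veth pair per side; the *_br ends stay in the root namespace and join the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                    # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
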
00:14:43.598 [2024-12-06 12:22:30.144081] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.866 [2024-12-06 12:22:30.286595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.866 [2024-12-06 12:22:30.314292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.866 [2024-12-06 12:22:30.314361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.866 [2024-12-06 12:22:30.314387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.866 [2024-12-06 12:22:30.314394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.866 [2024-12-06 12:22:30.314400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.867 [2024-12-06 12:22:30.314750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.867 [2024-12-06 12:22:30.343312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.802 [2024-12-06 12:22:31.169959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.802 [2024-12-06 12:22:31.178063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.802 12:22:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.802 null0 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.802 null1 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75274 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75274 /tmp/host.sock 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75274 ']' 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.802 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.802 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.802 [2024-12-06 12:22:31.266048] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
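
The host-side nvmf_tgt just launched on /tmp/host.sock is what drives the discovery checks that follow. A condensed sketch of that flow, using the RPC calls as they appear below (rpc_cmd is the autotest RPC helper; get_subsystem_names and get_bdev_list are this script's small jq wrappers around bdev_nvme_get_controllers and bdev_get_bdevs):

  # Target side, inside the netns: transport, a discovery listener on 8009, and two null bdevs.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512

  # Host side, via /tmp/host.sock: start discovery against the 8009 listener.
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test

  # Nothing is exposed yet, so both controller and bdev lists are expected to be empty.
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'

  # Publish a data subsystem with a namespace and listener; discovery should then attach it.
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
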
00:14:44.802 [2024-12-06 12:22:31.266150] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75274 ] 00:14:44.802 [2024-12-06 12:22:31.412875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.802 [2024-12-06 12:22:31.441503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.061 [2024-12-06 12:22:31.469809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.061 12:22:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:14:45.061 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:45.062 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.320 12:22:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:14:45.320 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.321 [2024-12-06 12:22:31.882252] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:45.321 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.579 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:14:45.579 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:14:45.579 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:45.579 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:45.579 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:45.579 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:45.579 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:45.580 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:45.580 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:45.580 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:45.580 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:45.580 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.580 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:14:45.580 12:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:14:46.145 [2024-12-06 12:22:32.532716] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:14:46.145 [2024-12-06 12:22:32.532741] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:14:46.146 [2024-12-06 12:22:32.532765] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:46.146 
[2024-12-06 12:22:32.538755] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:14:46.146 [2024-12-06 12:22:32.593028] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:14:46.146 [2024-12-06 12:22:32.594002] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1ac3da0:1 started. 00:14:46.146 [2024-12-06 12:22:32.595760] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:46.146 [2024-12-06 12:22:32.595946] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:14:46.146 [2024-12-06 12:22:32.601253] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1ac3da0 was disconnected and freed. delete nvme_qpair. 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.713 12:22:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:46.713 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:46.714 [2024-12-06 12:22:33.344421] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1ad2190:1 started. 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.714 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.714 [2024-12-06 12:22:33.351857] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1ad2190 was disconnected and freed. delete nvme_qpair. 
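For readability, the target/host JSON-RPC sequence that drives the discovery events traced above can be summarized as the sketch below. It only restates rpc_cmd invocations already visible in the xtrace (bdev_nvme_start_discovery, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener, nvmf_subsystem_add_host); rpc_cmd itself is provided by the test environment and is assumed here to forward its arguments to the SPDK JSON-RPC server, with -s /tmp/host.sock selecting the separate host application. Treat it as an illustration, not the verbatim test script.

    # Host application: start a persistent discovery connection to the
    # discovery service listening on 10.0.0.3:8009.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test

    # Target application: create a subsystem, give it a namespace, expose it on
    # port 4420, allow the host NQN, then add a second namespace.
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

    # After each step the test polls the host application with
    # bdev_nvme_get_controllers and bdev_get_bdevs until controller "nvme0" and
    # bdevs "nvme0n1 nvme0n2" show up, and checks notify_get_notifications counts.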
00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.972 [2024-12-06 12:22:33.463437] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:46.972 [2024-12-06 12:22:33.464158] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:14:46.972 [2024-12-06 12:22:33.464195] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:46.972 [2024-12-06 12:22:33.470169] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.972 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.972 [2024-12-06 12:22:33.528579] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:14:46.972 [2024-12-06 12:22:33.528614] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:46.972 [2024-12-06 12:22:33.528623] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:14:46.973 [2024-12-06 12:22:33.528627] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:46.973 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.232 [2024-12-06 12:22:33.699955] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:14:47.232 [2024-12-06 12:22:33.700127] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:47.232 [2024-12-06 12:22:33.703107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.232 [2024-12-06 12:22:33.703141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:47.232 [2024-12-06 12:22:33.703168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.232 [2024-12-06 12:22:33.703192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:47.232 [2024-12-06 12:22:33.703388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.232 [2024-12-06 12:22:33.703409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:47.232 [2024-12-06 12:22:33.703420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.232 [2024-12-06 12:22:33.703429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:47.232 [2024-12-06 12:22:33.703440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1a9ffb0 is same with the state(6) to be set 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:47.232 [2024-12-06 12:22:33.705972] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:47.232 [2024-12-06 12:22:33.705992] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:47.232 [2024-12-06 12:22:33.706038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9ffb0 (9): Bad file descriptor 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:47.232 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:47.233 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.233 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.233 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:47.233 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:14:47.492 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:47.492 
12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:47.492 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:47.493 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:47.493 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.493 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.871 [2024-12-06 12:22:35.123452] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:14:48.871 [2024-12-06 12:22:35.123651] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:14:48.871 [2024-12-06 12:22:35.123683] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:48.871 [2024-12-06 12:22:35.129485] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:14:48.871 [2024-12-06 12:22:35.187797] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:14:48.871 [2024-12-06 12:22:35.188593] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1ac6c00:1 started. 00:14:48.871 [2024-12-06 12:22:35.190592] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:48.871 [2024-12-06 12:22:35.190769] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.871 [2024-12-06 12:22:35.192598] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:48.871 ac6c00 was disconnected and freed. delete nvme_qpair. 
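The host/discovery.sh@143 step above re-runs bdev_nvme_start_discovery with the same bdev name and asserts that it fails; the trace shows the NOT / valid_exec_arg wrappers from common/autotest_common.sh performing that assertion. A much-simplified sketch of the idiom is given below; the real helper also validates the argument type and treats exit codes above 128 specially, as the @640-@679 trace lines indicate, so this is an approximation rather than the upstream implementation.

    # Sketch only: assert that a command fails (simplified from the trace).
    NOT() {
        local es=0
        "$@" || es=$?
        # Succeed only if the wrapped command returned a non-zero status.
        (( es != 0 ))
    }

    # As used at host/discovery.sh@143: starting a second discovery service with
    # an already-registered name must fail with JSON-RPC error -17 ("File exists"),
    # which the RPC wrapper prints as the request/response pair seen below.
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w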
00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.871 request: 00:14:48.871 { 00:14:48.871 "name": "nvme", 00:14:48.871 "trtype": "tcp", 00:14:48.871 "traddr": "10.0.0.3", 00:14:48.871 "adrfam": "ipv4", 00:14:48.871 "trsvcid": "8009", 00:14:48.871 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:48.871 "wait_for_attach": true, 00:14:48.871 "method": "bdev_nvme_start_discovery", 00:14:48.871 "req_id": 1 00:14:48.871 } 00:14:48.871 Got JSON-RPC error response 00:14:48.871 response: 00:14:48.871 { 00:14:48.871 "code": -17, 00:14:48.871 "message": "File exists" 00:14:48.871 } 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.871 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.871 request: 00:14:48.871 { 00:14:48.872 "name": "nvme_second", 00:14:48.872 "trtype": "tcp", 00:14:48.872 "traddr": "10.0.0.3", 00:14:48.872 "adrfam": "ipv4", 00:14:48.872 "trsvcid": "8009", 00:14:48.872 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:48.872 "wait_for_attach": true, 00:14:48.872 "method": "bdev_nvme_start_discovery", 00:14:48.872 "req_id": 1 00:14:48.872 } 00:14:48.872 Got JSON-RPC error response 00:14:48.872 response: 00:14:48.872 { 00:14:48.872 "code": -17, 00:14:48.872 "message": "File exists" 00:14:48.872 } 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.872 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:49.815 [2024-12-06 12:22:36.459115] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:14:49.815 [2024-12-06 12:22:36.459173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3bb0 with addr=10.0.0.3, port=8010 00:14:49.815 [2024-12-06 12:22:36.459242] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:49.815 [2024-12-06 12:22:36.459270] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:49.815 [2024-12-06 12:22:36.459278] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:14:51.194 [2024-12-06 12:22:37.459096] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:51.194 [2024-12-06 12:22:37.459151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad35d0 with addr=10.0.0.3, port=8010 00:14:51.194 [2024-12-06 12:22:37.459167] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:51.194 [2024-12-06 12:22:37.459190] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:51.194 [2024-12-06 12:22:37.459225] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:14:52.134 [2024-12-06 12:22:38.459029] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:14:52.134 request: 00:14:52.134 { 00:14:52.134 "name": "nvme_second", 00:14:52.134 "trtype": "tcp", 00:14:52.134 "traddr": "10.0.0.3", 00:14:52.134 "adrfam": "ipv4", 00:14:52.134 "trsvcid": "8010", 00:14:52.134 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:52.134 "wait_for_attach": false, 00:14:52.134 "attach_timeout_ms": 3000, 00:14:52.134 "method": "bdev_nvme_start_discovery", 00:14:52.134 "req_id": 1 00:14:52.134 } 00:14:52.134 Got JSON-RPC error response 00:14:52.134 response: 00:14:52.134 { 00:14:52.134 "code": -110, 00:14:52.134 "message": "Connection timed out" 00:14:52.134 } 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:14:52.134 12:22:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75274 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.134 rmmod nvme_tcp 00:14:52.134 rmmod nvme_fabrics 00:14:52.134 rmmod nvme_keyring 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75242 ']' 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75242 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75242 ']' 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75242 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75242 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:52.134 killing process with pid 75242 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75242' 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75242 00:14:52.134 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75242 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.394 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.394 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:52.394 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.394 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.394 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.394 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:14:52.394 00:14:52.394 real 0m9.545s 00:14:52.394 user 0m17.839s 00:14:52.394 sys 0m1.852s 00:14:52.394 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.394 ************************************ 00:14:52.394 END TEST nvmf_host_discovery 00:14:52.394 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:52.394 ************************************ 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:52.655 ************************************ 00:14:52.655 START TEST nvmf_host_multipath_status 00:14:52.655 ************************************ 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:52.655 * Looking for test storage... 00:14:52.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:52.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.655 --rc genhtml_branch_coverage=1 00:14:52.655 --rc genhtml_function_coverage=1 00:14:52.655 --rc genhtml_legend=1 00:14:52.655 --rc geninfo_all_blocks=1 00:14:52.655 --rc geninfo_unexecuted_blocks=1 00:14:52.655 00:14:52.655 ' 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:52.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.655 --rc genhtml_branch_coverage=1 00:14:52.655 --rc genhtml_function_coverage=1 00:14:52.655 --rc genhtml_legend=1 00:14:52.655 --rc geninfo_all_blocks=1 00:14:52.655 --rc geninfo_unexecuted_blocks=1 00:14:52.655 00:14:52.655 ' 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:52.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.655 --rc genhtml_branch_coverage=1 00:14:52.655 --rc genhtml_function_coverage=1 00:14:52.655 --rc genhtml_legend=1 00:14:52.655 --rc geninfo_all_blocks=1 00:14:52.655 --rc geninfo_unexecuted_blocks=1 00:14:52.655 00:14:52.655 ' 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:52.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.655 --rc genhtml_branch_coverage=1 00:14:52.655 --rc genhtml_function_coverage=1 00:14:52.655 --rc genhtml_legend=1 00:14:52.655 --rc geninfo_all_blocks=1 00:14:52.655 --rc geninfo_unexecuted_blocks=1 00:14:52.655 00:14:52.655 ' 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.655 12:22:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.655 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.656 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.656 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:52.916 Cannot find device "nvmf_init_br" 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:52.916 Cannot find device "nvmf_init_br2" 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:52.916 Cannot find device "nvmf_tgt_br" 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.916 Cannot find device "nvmf_tgt_br2" 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:52.916 Cannot find device "nvmf_init_br" 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:52.916 Cannot find device "nvmf_init_br2" 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:52.916 Cannot find device "nvmf_tgt_br" 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:52.916 Cannot find device "nvmf_tgt_br2" 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:52.916 Cannot find device "nvmf_br" 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:14:52.916 Cannot find device "nvmf_init_if" 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:52.916 Cannot find device "nvmf_init_if2" 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:52.916 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:53.176 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.176 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:14:53.176 00:14:53.176 --- 10.0.0.3 ping statistics --- 00:14:53.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.176 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:53.176 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:53.176 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:14:53.176 00:14:53.176 --- 10.0.0.4 ping statistics --- 00:14:53.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.176 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:53.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:53.176 00:14:53.176 --- 10.0.0.1 ping statistics --- 00:14:53.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.176 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:53.176 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:53.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:14:53.176 00:14:53.176 --- 10.0.0.2 ping statistics --- 00:14:53.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.176 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=75770 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 75770 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 75770 ']' 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.177 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:53.177 [2024-12-06 12:22:39.753096] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:14:53.177 [2024-12-06 12:22:39.753215] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.436 [2024-12-06 12:22:39.895410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:53.436 [2024-12-06 12:22:39.923977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.436 [2024-12-06 12:22:39.924035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.436 [2024-12-06 12:22:39.924060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.436 [2024-12-06 12:22:39.924066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.436 [2024-12-06 12:22:39.924072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.436 [2024-12-06 12:22:39.924906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.436 [2024-12-06 12:22:39.924915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.436 [2024-12-06 12:22:39.952192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.436 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.436 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:14:53.436 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:53.436 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:53.436 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:53.436 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.436 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75770 00:14:53.436 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:53.695 [2024-12-06 12:22:40.342514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.954 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:53.954 Malloc0 00:14:53.954 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:14:54.529 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:54.529 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:54.787 [2024-12-06 12:22:41.380636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:54.787 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:55.046 [2024-12-06 12:22:41.596682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:55.046 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75818 00:14:55.046 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:55.046 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.046 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75818 /var/tmp/bdevperf.sock 00:14:55.046 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 75818 ']' 00:14:55.046 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.046 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.046 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:55.046 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.046 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:55.983 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.983 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:14:55.983 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:56.243 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:56.812 Nvme0n1 00:14:56.812 12:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:57.071 Nvme0n1 00:14:57.071 12:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:14:57.071 12:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:58.977 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:14:58.977 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:59.235 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:59.495 12:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:00.433 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:00.433 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:00.433 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:00.433 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:00.692 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:00.692 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:00.692 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:00.692 12:22:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:00.952 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:00.952 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:00.952 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:00.952 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:01.211 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:01.211 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:01.211 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:01.211 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:01.470 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:01.470 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:01.470 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:01.470 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:01.729 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:01.729 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:01.729 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:01.729 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:01.989 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:01.989 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:01.989 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:02.248 12:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:02.508 12:22:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:03.444 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:03.444 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:03.444 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:03.444 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:03.702 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:03.702 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:03.961 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:03.962 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:03.962 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:03.962 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:03.962 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:03.962 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:04.530 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:04.530 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:04.530 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:04.530 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:04.530 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:04.530 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:04.530 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:04.530 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:04.790 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:04.790 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:04.790 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:04.790 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:05.049 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:05.049 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:05.049 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:05.308 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:15:05.567 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:06.504 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:06.504 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:06.504 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:06.504 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:06.762 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:06.762 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:06.762 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:06.762 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:07.020 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:07.020 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:07.020 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:07.020 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:07.278 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:07.278 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:15:07.278 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:07.278 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:07.536 12:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:07.536 12:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:07.536 12:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:07.536 12:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:07.794 12:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:07.794 12:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:07.794 12:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:07.794 12:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:08.359 12:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:08.359 12:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:08.359 12:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:08.359 12:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:08.616 12:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:09.989 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:09.989 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:09.989 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:09.990 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:09.990 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:09.990 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:09.990 12:22:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:09.990 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:10.248 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:10.248 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:10.248 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:10.248 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:10.506 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:10.506 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:10.506 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:10.506 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:10.765 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:10.765 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:10.765 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:10.765 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:11.024 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:11.024 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:11.024 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:11.024 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:11.357 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:11.357 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:15:11.357 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:11.646 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:11.910 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:12.848 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:12.848 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:12.848 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:12.848 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:13.107 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:13.107 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:13.107 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:13.107 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:13.366 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:13.366 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:13.366 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:13.366 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:13.625 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:13.625 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:13.625 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:13.625 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:13.884 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:13.884 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:13.884 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:13.884 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:15:14.143 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:14.143 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:14.143 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:14.143 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:14.402 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:14.402 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:14.402 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:14.661 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:14.920 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:15.857 12:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:15.857 12:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:15.858 12:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.858 12:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:16.117 12:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:16.117 12:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:16.117 12:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:16.117 12:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:16.377 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:16.377 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:16.377 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:16.377 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:16.636 12:23:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:16.636 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:16.636 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:16.636 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:16.896 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:16.896 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:16.896 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:16.896 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:17.155 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:17.155 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:17.155 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:17.155 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:17.414 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:17.414 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:17.673 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:17.673 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:17.931 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:18.190 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:15:19.123 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:15:19.123 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:19.123 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.123 12:23:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:19.382 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.382 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:19.382 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:19.382 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.641 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.641 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:19.641 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.641 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:19.901 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.901 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:19.901 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.901 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:20.160 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:20.160 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:20.160 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.160 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:20.420 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:20.420 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:20.420 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:20.420 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.679 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:20.679 12:23:07 
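Each @64 record above is one invocation of a status helper: it calls bdev_nvme_get_io_paths over the bdevperf RPC socket, extracts a single boolean field (current, connected, or accessible) for the path with the given trsvcid via jq, and compares it against the expected value; the @68-@73 records run that check for both ports and all three fields. A hedged reconstruction of that pattern, with names inferred from the trace rather than copied from multipath_status.sh (rpc_py is an assumed path to scripts/rpc.py):

    # Reconstruction of the port_status/check_status pattern visible in the
    # @64 and @68-@73 trace records above.
    port_status() {
        # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected value
        local actual
        actual=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$actual" == "$3" ]]
    }
    check_status() {
        # Expected values, in order: current(4420) current(4421) connected(4420)
        # connected(4421) accessible(4420) accessible(4421)
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Before the @116 bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call, the check_status cycles above show exactly one path with current=true at a time; after switching to active_active with both listeners optimized, the check_status true true true true true true cycle shows both paths reporting current=true simultaneously.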
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:15:20.679 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:20.937 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:21.194 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:15:22.126 12:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:15:22.126 12:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:22.126 12:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.126 12:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:22.693 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:22.693 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:22.693 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.693 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:22.693 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.693 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:22.693 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.693 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:23.260 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.260 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:23.261 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.261 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:23.520 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.520 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:23.520 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.520 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:23.779 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.779 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:23.779 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.779 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:24.039 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:24.039 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:15:24.039 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:24.299 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:15:24.558 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:15:25.494 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:15:25.494 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:25.494 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.494 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:25.753 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.753 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:25.753 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.753 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:26.011 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.011 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:15:26.011 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:26.011 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.271 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.271 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:26.271 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.271 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:26.530 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.530 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:26.530 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.530 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:26.790 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.790 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:26.790 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.790 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:27.049 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:27.049 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:15:27.049 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:27.308 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:27.566 12:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:15:28.941 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:15:28.941 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:28.941 12:23:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.941 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:28.941 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.941 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:28.941 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.941 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:29.198 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:29.199 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:29.199 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.199 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:29.458 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.458 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:29.458 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:29.458 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.715 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.715 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:29.715 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.715 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:29.973 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.973 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:29.973 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.973 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75818 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 75818 ']' 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 75818 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75818 00:15:30.231 killing process with pid 75818 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75818' 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 75818 00:15:30.231 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 75818 00:15:30.231 { 00:15:30.231 "results": [ 00:15:30.231 { 00:15:30.231 "job": "Nvme0n1", 00:15:30.231 "core_mask": "0x4", 00:15:30.231 "workload": "verify", 00:15:30.231 "status": "terminated", 00:15:30.231 "verify_range": { 00:15:30.231 "start": 0, 00:15:30.231 "length": 16384 00:15:30.231 }, 00:15:30.231 "queue_depth": 128, 00:15:30.231 "io_size": 4096, 00:15:30.231 "runtime": 33.215001, 00:15:30.231 "iops": 9943.12780541539, 00:15:30.231 "mibps": 38.84034298990387, 00:15:30.231 "io_failed": 0, 00:15:30.231 "io_timeout": 0, 00:15:30.231 "avg_latency_us": 12845.625208624253, 00:15:30.231 "min_latency_us": 804.3054545454545, 00:15:30.231 "max_latency_us": 4026531.84 00:15:30.231 } 00:15:30.231 ], 00:15:30.231 "core_count": 1 00:15:30.231 } 00:15:30.491 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75818 00:15:30.491 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:30.491 [2024-12-06 12:22:41.668819] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:15:30.491 [2024-12-06 12:22:41.668919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75818 ] 00:15:30.491 [2024-12-06 12:22:41.814024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.491 [2024-12-06 12:22:41.843503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.491 [2024-12-06 12:22:41.870614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.491 Running I/O for 90 seconds... 
00:15:30.491 9832.00 IOPS, 38.41 MiB/s [2024-12-06T12:23:17.149Z] 10280.00 IOPS, 40.16 MiB/s [2024-12-06T12:23:17.149Z] 10496.00 IOPS, 41.00 MiB/s [2024-12-06T12:23:17.149Z] 10537.00 IOPS, 41.16 MiB/s [2024-12-06T12:23:17.149Z] 10489.60 IOPS, 40.98 MiB/s [2024-12-06T12:23:17.149Z] 10516.83 IOPS, 41.08 MiB/s [2024-12-06T12:23:17.149Z] 10497.86 IOPS, 41.01 MiB/s [2024-12-06T12:23:17.149Z] 10487.62 IOPS, 40.97 MiB/s [2024-12-06T12:23:17.149Z] 10521.33 IOPS, 41.10 MiB/s [2024-12-06T12:23:17.149Z] 10550.00 IOPS, 41.21 MiB/s [2024-12-06T12:23:17.149Z] 10545.09 IOPS, 41.19 MiB/s [2024-12-06T12:23:17.149Z] 10565.67 IOPS, 41.27 MiB/s [2024-12-06T12:23:17.149Z] 10582.46 IOPS, 41.34 MiB/s [2024-12-06T12:23:17.149Z] 10583.14 IOPS, 41.34 MiB/s [2024-12-06T12:23:17.149Z] [2024-12-06 12:22:58.134059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:30.491 [2024-12-06 12:22:58.134120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:30.491 [2024-12-06 12:22:58.134220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:30.491 [2024-12-06 12:22:58.134254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:30.491 [2024-12-06 12:22:58.134285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:30.491 [2024-12-06 12:22:58.134316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:30.491 [2024-12-06 12:22:58.134346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:30.491 [2024-12-06 12:22:58.134376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:30.491 [2024-12-06 12:22:58.134407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.491 [2024-12-06 12:22:58.134437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.491 [2024-12-06 12:22:58.134512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.491 [2024-12-06 12:22:58.134544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.491 [2024-12-06 12:22:58.134575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.491 [2024-12-06 12:22:58.134608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.491 [2024-12-06 12:22:58.134639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.491 [2024-12-06 12:22:58.134671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.491 [2024-12-06 12:22:58.134704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.491 [2024-12-06 12:22:58.134735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.491 [2024-12-06 12:22:58.134753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.491 [2024-12-06 12:22:58.134766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:15:30.491 [2024-12-06 12:22:58] (condensed) nvme_qpair.c: 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion repeat the same paired *NOTICE* lines for every outstanding READ and WRITE on qid:1 (nsid:1, len:8), each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 while the test holds the path inaccessible; the individual per-command entries are omitted here.
00:15:30.493 10193.87 IOPS, 39.82 MiB/s [2024-12-06T12:23:17.151Z] 9556.75 IOPS, 37.33 MiB/s [2024-12-06T12:23:17.151Z] 8994.59 IOPS, 35.14 MiB/s [2024-12-06T12:23:17.151Z] 8494.89 IOPS, 33.18 MiB/s [2024-12-06T12:23:17.151Z] 8349.84 IOPS, 32.62 MiB/s [2024-12-06T12:23:17.151Z] 8449.35 IOPS, 33.01 MiB/s [2024-12-06T12:23:17.151Z] 8592.62 IOPS, 33.56 MiB/s [2024-12-06T12:23:17.151Z] 8850.77 IOPS, 34.57 MiB/s [2024-12-06T12:23:17.151Z] 9063.61 IOPS, 35.40 MiB/s [2024-12-06T12:23:17.151Z] 9247.62 IOPS, 36.12 MiB/s [2024-12-06T12:23:17.151Z] 9313.72 IOPS, 36.38 MiB/s [2024-12-06T12:23:17.151Z] 9358.42 IOPS, 36.56 MiB/s [2024-12-06T12:23:17.151Z] 9392.19 IOPS, 36.69 MiB/s [2024-12-06T12:23:17.151Z] 9517.18 IOPS, 37.18 MiB/s [2024-12-06T12:23:17.151Z] 9673.59 IOPS, 37.79 MiB/s [2024-12-06T12:23:17.151Z] 9814.40 IOPS, 38.34 MiB/s [2024-12-06T12:23:17.151Z]
00:15:30.493 [2024-12-06 12:23:14] (condensed) a second, shorter burst of the same paired READ/WRITE command and ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion notices on qid:1 follows and is likewise omitted.
00:15:30.494 9893.39 IOPS, 38.65 MiB/s [2024-12-06T12:23:17.152Z] 9922.69 IOPS, 38.76 MiB/s [2024-12-06T12:23:17.152Z] 9942.00 IOPS, 38.84 MiB/s [2024-12-06T12:23:17.152Z] Received shutdown signal, test time was about 33.215888 seconds
00:15:30.494
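For context on the condensed notices above: the (03/02) pair printed by spdk_nvme_print_completion is the NVMe status code type / status code, where type 0x3 is Path Related Status and code 0x2 is Asymmetric Access Inaccessible, i.e. the ANA state the multipath_status test deliberately drives the active path into. A minimal, hedged way to tally such completions from a saved copy of this console output (the file name console.log below is purely illustrative, not something this job writes):

  # Count ANA-inaccessible completions, then split the queued commands by opcode.
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log | wc -l
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: READ'  console.log | wc -l
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE' console.log | wc -l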
00:15:30.494 Latency(us)
00:15:30.494 [2024-12-06T12:23:17.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:30.494 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:15:30.494 Verification LBA range: start 0x0 length 0x4000
00:15:30.494 Nvme0n1 : 33.22 9943.13 38.84 0.00 0.00 12845.63 804.31 4026531.84
00:15:30.494 [2024-12-06T12:23:17.152Z] ===================================================================================================================
00:15:30.494 [2024-12-06T12:23:17.152Z] Total : 9943.13 38.84 0.00 0.00 12845.63 804.31 4026531.84
00:15:30.494 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:30.752 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:15:30.752 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:30.752 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:15:30.752 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:30.752 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:15:30.752 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:30.752 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:15:30.752 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:30.752 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:30.752 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 75770 ']'
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 75770
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 75770 ']'
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 75770
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75770
00:15:31.011 killing process with pid 75770
12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75770'
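The trace just above is the teardown at the end of multipath_status.sh: delete the test subsystem over the RPC socket, unload the host-side NVMe/TCP kernel modules, and stop the SPDK target process (PID 75770 in this run, whose process name reactor_0 marks an SPDK app). A condensed sketch of that visible sequence, with the path and PID taken from this run only and no claim to replace nvmftestfini:

  #!/usr/bin/env bash
  # Teardown steps as they appear in the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the multipath test subsystem

  # modprobe -r on nvme-tcp also pulls out its now-unused dependencies,
  # which is why rmmod nvme_tcp/nvme_fabrics/nvme_keyring show up above.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  kill 75770   # stop the nvmf target; the harness then waits on the PID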
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 75770
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 75770
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:15:31.011 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
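nvmf_veth_fini, traced above, unwinds the virtual network the harness set up for this tcp/virt run: it detaches the veth endpoints from the bridge, brings them down, deletes the bridge and the host-side interfaces, removes the target-side interfaces inside the nvmf_tgt_ns_spdk namespace, and then remove_spdk_ns drops the namespace. A hedged sketch of the same cleanup; the final 'ip netns delete' is an assumption about what _remove_spdk_ns amounts to, since the trace only shows it being called:

  #!/usr/bin/env bash
  # Virtual-network teardown as traced above; interface and namespace names
  # are the ones this harness uses.
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster || true   # detach from the nvmf_br bridge
      ip link set "$ifc" down     || true
  done
  ip link delete nvmf_br type bridge || true    # remove the bridge itself
  ip link delete nvmf_init_if  || true          # host-side veth endpoints
  ip link delete nvmf_init_if2 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
  ip netns delete nvmf_tgt_ns_spdk || true      # assumed equivalent of remove_spdk_ns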
00:15:31.269
00:15:31.269 real 0m38.721s
00:15:31.269 user 2m6.107s
00:15:31.269 sys 0m10.715s
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:15:31.269 ************************************
00:15:31.269 END TEST nvmf_host_multipath_status
00:15:31.269 ************************************
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:15:31.269 ************************************
00:15:31.269 START TEST nvmf_discovery_remove_ifc
00:15:31.269 ************************************
00:15:31.269 12:23:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:15:31.528 * Looking for test storage...
00:15:31.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:15:31.528 12:23:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:31.528 12:23:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:15:31.528 12:23:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:15:31.528 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:15:31.528 12:23:18
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:31.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.529 --rc genhtml_branch_coverage=1 00:15:31.529 --rc genhtml_function_coverage=1 00:15:31.529 --rc genhtml_legend=1 00:15:31.529 --rc geninfo_all_blocks=1 00:15:31.529 --rc geninfo_unexecuted_blocks=1 00:15:31.529 00:15:31.529 ' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:31.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.529 --rc genhtml_branch_coverage=1 00:15:31.529 --rc genhtml_function_coverage=1 00:15:31.529 --rc genhtml_legend=1 00:15:31.529 --rc geninfo_all_blocks=1 00:15:31.529 --rc geninfo_unexecuted_blocks=1 00:15:31.529 00:15:31.529 ' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:31.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.529 --rc genhtml_branch_coverage=1 00:15:31.529 --rc genhtml_function_coverage=1 00:15:31.529 --rc genhtml_legend=1 00:15:31.529 --rc geninfo_all_blocks=1 00:15:31.529 --rc geninfo_unexecuted_blocks=1 00:15:31.529 00:15:31.529 ' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:31.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.529 --rc genhtml_branch_coverage=1 00:15:31.529 --rc genhtml_function_coverage=1 00:15:31.529 --rc genhtml_legend=1 00:15:31.529 --rc geninfo_all_blocks=1 00:15:31.529 --rc geninfo_unexecuted_blocks=1 00:15:31.529 00:15:31.529 ' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 
-- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.529 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.529 12:23:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:31.529 Cannot find device "nvmf_init_br" 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:31.529 Cannot find device "nvmf_init_br2" 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:31.529 Cannot find device "nvmf_tgt_br" 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.529 Cannot find device "nvmf_tgt_br2" 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:31.529 Cannot find device "nvmf_init_br" 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:31.529 Cannot find device "nvmf_init_br2" 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:31.529 Cannot find device "nvmf_tgt_br" 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:31.529 Cannot find device "nvmf_tgt_br2" 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:31.529 Cannot find device "nvmf_br" 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:15:31.529 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:31.788 Cannot find device "nvmf_init_if" 00:15:31.788 12:23:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:15:31.788 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:31.788 Cannot find device "nvmf_init_if2" 00:15:31.788 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:15:31.788 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.789 12:23:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:31.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:31.789 00:15:31.789 --- 10.0.0.3 ping statistics --- 00:15:31.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.789 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:31.789 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:31.789 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:15:31.789 00:15:31.789 --- 10.0.0.4 ping statistics --- 00:15:31.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.789 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:31.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:31.789 00:15:31.789 --- 10.0.0.1 ping statistics --- 00:15:31.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.789 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:31.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:15:31.789 00:15:31.789 --- 10.0.0.2 ping statistics --- 00:15:31.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.789 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:31.789 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:32.047 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=76664 00:15:32.047 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:32.047 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 76664 00:15:32.047 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 76664 ']' 00:15:32.047 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.047 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.047 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
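For readers who want to reproduce the test network outside the harness, the block below is a condensed sketch of the veth/netns topology that the nvmf_veth_init commands above build before nvmf_tgt is launched inside the namespace. It is reconstructed from the logged commands, not copied from nvmf/common.sh, so helper details of that script may differ.

```bash
#!/usr/bin/env bash
# Sketch of the veth/netns topology seen in the trace above (reconstructed, not verbatim).
set -e
ip netns add nvmf_tgt_ns_spdk

# Initiator-side and target-side veth pairs; the *_br ends later join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target interfaces live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Host (initiator) addresses .1/.2, target addresses .3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the *_br ends together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic on port 4420 and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check, matching the ping statistics in the log.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
```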
00:15:32.047 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.047 12:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:32.047 [2024-12-06 12:23:18.502428] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:15:32.047 [2024-12-06 12:23:18.502532] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.047 [2024-12-06 12:23:18.646948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.047 [2024-12-06 12:23:18.674007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.047 [2024-12-06 12:23:18.674055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.047 [2024-12-06 12:23:18.674079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.047 [2024-12-06 12:23:18.674086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.047 [2024-12-06 12:23:18.674092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.047 [2024-12-06 12:23:18.674359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.047 [2024-12-06 12:23:18.701643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:32.982 [2024-12-06 12:23:19.496582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.982 [2024-12-06 12:23:19.504653] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:32.982 null0 00:15:32.982 [2024-12-06 12:23:19.536563] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76696 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76696 /tmp/host.sock 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 76696 ']' 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.982 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.982 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:32.982 [2024-12-06 12:23:19.604362] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:15:32.983 [2024-12-06 12:23:19.604455] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76696 ] 00:15:33.241 [2024-12-06 12:23:19.749404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.241 [2024-12-06 12:23:19.787721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.241 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.241 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:15:33.241 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:33.241 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:33.241 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.241 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:33.241 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.241 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:33.241 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.241 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:33.241 [2024-12-06 12:23:19.896713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.499 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.500 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:33.500 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.500 12:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:34.436 [2024-12-06 12:23:20.939223] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:34.436 [2024-12-06 12:23:20.939290] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:34.436 [2024-12-06 12:23:20.939337] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:34.436 [2024-12-06 12:23:20.945267] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:34.436 [2024-12-06 12:23:20.999648] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:34.436 [2024-12-06 12:23:21.000500] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x9b7f00:1 started. 00:15:34.436 [2024-12-06 12:23:21.001968] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:34.436 [2024-12-06 12:23:21.002038] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:34.436 [2024-12-06 12:23:21.002063] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:34.436 [2024-12-06 12:23:21.002078] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:34.436 [2024-12-06 12:23:21.002099] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:34.436 [2024-12-06 12:23:21.008143] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9b7f00 was disconnected and freed. delete nvme_qpair. 
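The host-side setup that produced the attach sequence logged just above boils down to a handful of RPCs. The sketch below replays them, approximating the test framework's `rpc_cmd -s /tmp/host.sock` with SPDK's `scripts/rpc.py -s /tmp/host.sock` (an assumption about the wrapper and the checkout path); the RPC names and flags are copied verbatim from the trace.

```bash
#!/usr/bin/env bash
# Host-side discovery setup, reconstructed from the xtrace above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk            # path as seen in the log
rpc="$SPDK_DIR/scripts/rpc.py -s /tmp/host.sock" # assumption: rpc_cmd wraps this client

# Start a second SPDK app on core 0 that acts as the NVMe-oF host/initiator.
"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
hostpid=$!

$rpc bdev_nvme_set_options -e 1    # option string exactly as the test passes it
$rpc framework_start_init          # finish init after --wait-for-rpc

# Attach to the discovery service on 10.0.0.3:8009 and wait until the subsystem
# (nqn.2016-06.io.spdk:cnode0 at 10.0.0.3:4420) is attached as bdev "nvme0n1".
$rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach

# get_bdev_list equivalent used throughout the test: list attached bdev names.
$rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs
```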
00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:34.436 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:34.709 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.709 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:34.709 12:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:35.659 12:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:35.659 12:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:35.659 12:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.659 12:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.659 12:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:35.659 12:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:35.659 12:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:35.659 12:23:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.659 12:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:35.659 12:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:36.596 12:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:36.596 12:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:36.596 12:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.596 12:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:36.596 12:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:36.596 12:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:36.596 12:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:36.596 12:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.596 12:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:36.596 12:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:37.974 12:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:37.974 12:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.974 12:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:37.974 12:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.974 12:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:37.974 12:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:37.974 12:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:37.974 12:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.974 12:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:37.974 12:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:38.910 12:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:38.910 12:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.910 12:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:38.910 12:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.910 12:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:38.910 12:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:38.910 12:23:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:38.910 12:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.910 12:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:38.910 12:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:39.846 12:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:39.846 12:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:39.846 12:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.846 12:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:39.846 12:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:39.846 12:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:39.846 12:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:39.846 12:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.846 12:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:39.846 12:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:39.846 [2024-12-06 12:23:26.430076] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:39.846 [2024-12-06 12:23:26.430159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.846 [2024-12-06 12:23:26.430200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.846 [2024-12-06 12:23:26.430213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.846 [2024-12-06 12:23:26.430222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.846 [2024-12-06 12:23:26.430232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.846 [2024-12-06 12:23:26.430240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.846 [2024-12-06 12:23:26.430249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.846 [2024-12-06 12:23:26.430258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.846 [2024-12-06 12:23:26.430267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.846 [2024-12-06 12:23:26.430276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.846 [2024-12-06 12:23:26.430285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993fc0 is same with the state(6) to be set 00:15:39.846 [2024-12-06 12:23:26.440072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x993fc0 (9): Bad file descriptor 00:15:39.846 [2024-12-06 12:23:26.450088] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:15:39.846 [2024-12-06 12:23:26.450124] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:15:39.846 [2024-12-06 12:23:26.450130] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:15:39.846 [2024-12-06 12:23:26.450135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:39.846 [2024-12-06 12:23:26.450181] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:40.784 12:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:40.784 12:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.784 12:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:40.784 12:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.784 12:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:40.784 12:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:40.784 12:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:41.044 [2024-12-06 12:23:27.477280] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:41.044 [2024-12-06 12:23:27.477385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x993fc0 with addr=10.0.0.3, port=4420 00:15:41.044 [2024-12-06 12:23:27.477413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993fc0 is same with the state(6) to be set 00:15:41.044 [2024-12-06 12:23:27.477461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x993fc0 (9): Bad file descriptor 00:15:41.044 [2024-12-06 12:23:27.478296] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:15:41.044 [2024-12-06 12:23:27.478385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:41.044 [2024-12-06 12:23:27.478409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:41.044 [2024-12-06 12:23:27.478433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:41.044 [2024-12-06 12:23:27.478451] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:41.044 [2024-12-06 12:23:27.478464] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:15:41.044 [2024-12-06 12:23:27.478475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:15:41.044 [2024-12-06 12:23:27.478494] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:15:41.044 [2024-12-06 12:23:27.478506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:41.044 12:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.044 12:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:41.044 12:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:41.983 [2024-12-06 12:23:28.478553] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:41.983 [2024-12-06 12:23:28.478598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:41.983 [2024-12-06 12:23:28.478619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:41.983 [2024-12-06 12:23:28.478644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:41.983 [2024-12-06 12:23:28.478652] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:15:41.983 [2024-12-06 12:23:28.478660] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:41.983 [2024-12-06 12:23:28.478665] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:15:41.983 [2024-12-06 12:23:28.478670] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
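The failure injection itself is only two ip commands inside the target namespace, after which the test polls the host's bdev list once per second until nvme0n1 disappears. A sketch of that loop, reconstructed from the get_bdev_list/wait_for_bdev steps visible at discovery_remove_ifc.sh@29-34 and reusing the `$rpc` helper from the previous sketch:

```bash
# Pull the listener address out from under the connected controller.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

# get_bdev_list, as traced: bdev names, sorted, on one line.
get_bdev_list() {
    $rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# wait_for_bdev '' equivalent: poll until the host reports no bdevs, i.e. until the
# reconnect attempts logged above give up and the controller is torn down.
while [[ "$(get_bdev_list)" != '' ]]; do
    sleep 1
done
```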
00:15:41.983 [2024-12-06 12:23:28.478696] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:15:41.983 [2024-12-06 12:23:28.478726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.983 [2024-12-06 12:23:28.478741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.983 [2024-12-06 12:23:28.478752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.983 [2024-12-06 12:23:28.478760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.983 [2024-12-06 12:23:28.478769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.983 [2024-12-06 12:23:28.478777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.983 [2024-12-06 12:23:28.478785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.983 [2024-12-06 12:23:28.478793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.983 [2024-12-06 12:23:28.478801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.983 [2024-12-06 12:23:28.478809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.983 [2024-12-06 12:23:28.478816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:15:41.983 [2024-12-06 12:23:28.479432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91fa20 (9): Bad file descriptor 00:15:41.983 [2024-12-06 12:23:28.480429] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:41.983 [2024-12-06 12:23:28.480449] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:41.983 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:41.984 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.984 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:41.984 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:41.984 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.984 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:41.984 12:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:43.362 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:43.362 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:43.362 12:23:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:43.362 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.362 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:43.362 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:43.362 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:43.362 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.362 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:43.362 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:43.930 [2024-12-06 12:23:30.482955] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:43.930 [2024-12-06 12:23:30.482981] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:43.930 [2024-12-06 12:23:30.483014] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:43.930 [2024-12-06 12:23:30.488985] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:15:43.930 [2024-12-06 12:23:30.543319] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:15:43.930 [2024-12-06 12:23:30.543962] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x9c01d0:1 started. 00:15:43.930 [2024-12-06 12:23:30.545138] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:43.930 [2024-12-06 12:23:30.545209] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:43.930 [2024-12-06 12:23:30.545232] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:43.930 [2024-12-06 12:23:30.545248] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:15:43.930 [2024-12-06 12:23:30.545255] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:43.930 [2024-12-06 12:23:30.551752] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x9c01d0 was disconnected and freed. delete nvme_qpair. 
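The get_bdev_list / wait_for_bdev exchange traced above boils down to polling the host application's RPC socket until the re-attached namespace shows up as a bdev. A minimal standalone sketch of that pattern, assuming an SPDK checkout for scripts/rpc.py; the socket path /tmp/host.sock and the expected name nvme1n1 are the ones used in this run, and the comparison is simplified to a whole-word match rather than the test's exact string check:

  get_bdev_list() {
      # bdev_get_bdevs returns a JSON array of bdev objects; keep only the names, one line, sorted
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1
      until [[ " $(get_bdev_list) " == *" $expected "* ]]; do
          sleep 1   # same 1-second poll interval as the test
      done
  }

  wait_for_bdev nvme1n1   # returns once discovery has re-created the namespace bdev, as in the trace that follows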
00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76696 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 76696 ']' 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 76696 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76696 00:15:44.190 killing process with pid 76696 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76696' 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 76696 00:15:44.190 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 76696 00:15:44.450 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:44.450 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:44.450 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:15:44.450 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:44.450 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:15:44.450 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:44.450 12:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:44.450 rmmod nvme_tcp 00:15:44.450 rmmod nvme_fabrics 00:15:44.450 rmmod nvme_keyring 00:15:44.450 12:23:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 76664 ']' 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 76664 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 76664 ']' 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 76664 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76664 00:15:44.450 killing process with pid 76664 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76664' 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 76664 00:15:44.450 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 76664 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:44.710 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:44.969 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:44.969 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:44.969 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.969 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.969 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.969 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:15:44.969 00:15:44.969 real 0m13.569s 00:15:44.969 user 0m22.956s 00:15:44.969 sys 0m2.323s 00:15:44.969 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.969 ************************************ 00:15:44.969 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:44.970 END TEST nvmf_discovery_remove_ifc 00:15:44.970 ************************************ 00:15:44.970 12:23:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:44.970 12:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:44.970 12:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.970 12:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.970 ************************************ 00:15:44.970 START TEST nvmf_identify_kernel_target 00:15:44.970 ************************************ 00:15:44.970 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:44.970 * Looking for test storage... 
00:15:44.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:44.970 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:44.970 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:44.970 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:45.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.231 --rc genhtml_branch_coverage=1 00:15:45.231 --rc genhtml_function_coverage=1 00:15:45.231 --rc genhtml_legend=1 00:15:45.231 --rc geninfo_all_blocks=1 00:15:45.231 --rc geninfo_unexecuted_blocks=1 00:15:45.231 00:15:45.231 ' 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:45.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.231 --rc genhtml_branch_coverage=1 00:15:45.231 --rc genhtml_function_coverage=1 00:15:45.231 --rc genhtml_legend=1 00:15:45.231 --rc geninfo_all_blocks=1 00:15:45.231 --rc geninfo_unexecuted_blocks=1 00:15:45.231 00:15:45.231 ' 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:45.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.231 --rc genhtml_branch_coverage=1 00:15:45.231 --rc genhtml_function_coverage=1 00:15:45.231 --rc genhtml_legend=1 00:15:45.231 --rc geninfo_all_blocks=1 00:15:45.231 --rc geninfo_unexecuted_blocks=1 00:15:45.231 00:15:45.231 ' 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:45.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.231 --rc genhtml_branch_coverage=1 00:15:45.231 --rc genhtml_function_coverage=1 00:15:45.231 --rc genhtml_legend=1 00:15:45.231 --rc geninfo_all_blocks=1 00:15:45.231 --rc geninfo_unexecuted_blocks=1 00:15:45.231 00:15:45.231 ' 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
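The scripts/common.sh calls traced above ("lt 1.15 2" going through cmp_versions) amount to a field-by-field dotted-version comparison, used here only to decide which lcov coverage flags to export. A simplified, self-contained re-implementation of the same idea (a sketch, not the scripts/common.sh original):

  version_lt() {
      # Return 0 (true) when $1 is strictly older than $2, comparing dot-separated numeric fields
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal, so not less-than
  }

  version_lt 1.15 2 && echo 'lcov is older than 2.x, use the pre-2.0 LCOV_OPTS'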
00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.231 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:45.232 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:15:45.232 12:23:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.232 12:23:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:45.232 Cannot find device "nvmf_init_br" 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:45.232 Cannot find device "nvmf_init_br2" 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:45.232 Cannot find device "nvmf_tgt_br" 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.232 Cannot find device "nvmf_tgt_br2" 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:45.232 Cannot find device "nvmf_init_br" 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:45.232 Cannot find device "nvmf_init_br2" 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:45.232 Cannot find device "nvmf_tgt_br" 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:45.232 Cannot find device "nvmf_tgt_br2" 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:45.232 Cannot find device "nvmf_br" 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:45.232 Cannot find device "nvmf_init_if" 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:45.232 Cannot find device "nvmf_init_if2" 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.232 12:23:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:45.232 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:45.492 12:23:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:45.492 12:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:45.492 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:45.492 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:45.492 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:45.492 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:45.492 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:45.492 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:45.492 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:45.492 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:45.492 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:45.492 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:15:45.492 00:15:45.492 --- 10.0.0.3 ping statistics --- 00:15:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.493 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:45.493 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:45.493 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:15:45.493 00:15:45.493 --- 10.0.0.4 ping statistics --- 00:15:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.493 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:45.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:45.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:45.493 00:15:45.493 --- 10.0.0.1 ping statistics --- 00:15:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.493 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:45.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:45.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:15:45.493 00:15:45.493 --- 10.0.0.2 ping statistics --- 00:15:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.493 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:45.493 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:46.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:46.061 Waiting for block devices as requested 00:15:46.061 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:46.061 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:46.061 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:46.061 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:46.061 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:15:46.061 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:46.061 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:46.061 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:46.061 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:15:46.061 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:15:46.061 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:46.321 No valid GPT data, bailing 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:46.321 12:23:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:46.321 No valid GPT data, bailing 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:46.321 No valid GPT data, bailing 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:15:46.321 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:46.581 No valid GPT data, bailing 00:15:46.581 12:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid=539e2455-b2a8-46ce-bfce-40a317783b05 -a 10.0.0.1 -t tcp -s 4420 00:15:46.581 00:15:46.581 Discovery Log Number of Records 2, Generation counter 2 00:15:46.581 =====Discovery Log Entry 0====== 00:15:46.581 trtype: tcp 00:15:46.581 adrfam: ipv4 00:15:46.581 subtype: current discovery subsystem 00:15:46.581 treq: not specified, sq flow control disable supported 00:15:46.581 portid: 1 00:15:46.581 trsvcid: 4420 00:15:46.581 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:46.581 traddr: 10.0.0.1 00:15:46.581 eflags: none 00:15:46.581 sectype: none 00:15:46.581 =====Discovery Log Entry 1====== 00:15:46.581 trtype: tcp 00:15:46.581 adrfam: ipv4 00:15:46.581 subtype: nvme subsystem 00:15:46.581 treq: not 
specified, sq flow control disable supported 00:15:46.581 portid: 1 00:15:46.581 trsvcid: 4420 00:15:46.581 subnqn: nqn.2016-06.io.spdk:testnqn 00:15:46.581 traddr: 10.0.0.1 00:15:46.581 eflags: none 00:15:46.581 sectype: none 00:15:46.581 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:15:46.581 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:15:46.842 ===================================================== 00:15:46.842 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:46.842 ===================================================== 00:15:46.842 Controller Capabilities/Features 00:15:46.842 ================================ 00:15:46.842 Vendor ID: 0000 00:15:46.842 Subsystem Vendor ID: 0000 00:15:46.842 Serial Number: ded79d16b9219e0220f1 00:15:46.842 Model Number: Linux 00:15:46.842 Firmware Version: 6.8.9-20 00:15:46.842 Recommended Arb Burst: 0 00:15:46.842 IEEE OUI Identifier: 00 00 00 00:15:46.842 Multi-path I/O 00:15:46.842 May have multiple subsystem ports: No 00:15:46.842 May have multiple controllers: No 00:15:46.842 Associated with SR-IOV VF: No 00:15:46.842 Max Data Transfer Size: Unlimited 00:15:46.842 Max Number of Namespaces: 0 00:15:46.842 Max Number of I/O Queues: 1024 00:15:46.842 NVMe Specification Version (VS): 1.3 00:15:46.842 NVMe Specification Version (Identify): 1.3 00:15:46.842 Maximum Queue Entries: 1024 00:15:46.842 Contiguous Queues Required: No 00:15:46.842 Arbitration Mechanisms Supported 00:15:46.842 Weighted Round Robin: Not Supported 00:15:46.842 Vendor Specific: Not Supported 00:15:46.842 Reset Timeout: 7500 ms 00:15:46.842 Doorbell Stride: 4 bytes 00:15:46.842 NVM Subsystem Reset: Not Supported 00:15:46.842 Command Sets Supported 00:15:46.842 NVM Command Set: Supported 00:15:46.842 Boot Partition: Not Supported 00:15:46.842 Memory Page Size Minimum: 4096 bytes 00:15:46.842 Memory Page Size Maximum: 4096 bytes 00:15:46.842 Persistent Memory Region: Not Supported 00:15:46.842 Optional Asynchronous Events Supported 00:15:46.842 Namespace Attribute Notices: Not Supported 00:15:46.842 Firmware Activation Notices: Not Supported 00:15:46.842 ANA Change Notices: Not Supported 00:15:46.842 PLE Aggregate Log Change Notices: Not Supported 00:15:46.842 LBA Status Info Alert Notices: Not Supported 00:15:46.842 EGE Aggregate Log Change Notices: Not Supported 00:15:46.842 Normal NVM Subsystem Shutdown event: Not Supported 00:15:46.842 Zone Descriptor Change Notices: Not Supported 00:15:46.842 Discovery Log Change Notices: Supported 00:15:46.842 Controller Attributes 00:15:46.842 128-bit Host Identifier: Not Supported 00:15:46.842 Non-Operational Permissive Mode: Not Supported 00:15:46.842 NVM Sets: Not Supported 00:15:46.842 Read Recovery Levels: Not Supported 00:15:46.842 Endurance Groups: Not Supported 00:15:46.842 Predictable Latency Mode: Not Supported 00:15:46.842 Traffic Based Keep ALive: Not Supported 00:15:46.842 Namespace Granularity: Not Supported 00:15:46.842 SQ Associations: Not Supported 00:15:46.842 UUID List: Not Supported 00:15:46.842 Multi-Domain Subsystem: Not Supported 00:15:46.842 Fixed Capacity Management: Not Supported 00:15:46.842 Variable Capacity Management: Not Supported 00:15:46.842 Delete Endurance Group: Not Supported 00:15:46.842 Delete NVM Set: Not Supported 00:15:46.842 Extended LBA Formats Supported: Not Supported 00:15:46.842 Flexible Data 
Placement Supported: Not Supported 00:15:46.842 00:15:46.842 Controller Memory Buffer Support 00:15:46.842 ================================ 00:15:46.842 Supported: No 00:15:46.842 00:15:46.842 Persistent Memory Region Support 00:15:46.842 ================================ 00:15:46.842 Supported: No 00:15:46.842 00:15:46.842 Admin Command Set Attributes 00:15:46.842 ============================ 00:15:46.842 Security Send/Receive: Not Supported 00:15:46.842 Format NVM: Not Supported 00:15:46.842 Firmware Activate/Download: Not Supported 00:15:46.842 Namespace Management: Not Supported 00:15:46.842 Device Self-Test: Not Supported 00:15:46.842 Directives: Not Supported 00:15:46.842 NVMe-MI: Not Supported 00:15:46.842 Virtualization Management: Not Supported 00:15:46.842 Doorbell Buffer Config: Not Supported 00:15:46.842 Get LBA Status Capability: Not Supported 00:15:46.842 Command & Feature Lockdown Capability: Not Supported 00:15:46.842 Abort Command Limit: 1 00:15:46.842 Async Event Request Limit: 1 00:15:46.842 Number of Firmware Slots: N/A 00:15:46.842 Firmware Slot 1 Read-Only: N/A 00:15:46.842 Firmware Activation Without Reset: N/A 00:15:46.842 Multiple Update Detection Support: N/A 00:15:46.842 Firmware Update Granularity: No Information Provided 00:15:46.842 Per-Namespace SMART Log: No 00:15:46.842 Asymmetric Namespace Access Log Page: Not Supported 00:15:46.842 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:46.842 Command Effects Log Page: Not Supported 00:15:46.842 Get Log Page Extended Data: Supported 00:15:46.842 Telemetry Log Pages: Not Supported 00:15:46.842 Persistent Event Log Pages: Not Supported 00:15:46.842 Supported Log Pages Log Page: May Support 00:15:46.842 Commands Supported & Effects Log Page: Not Supported 00:15:46.842 Feature Identifiers & Effects Log Page:May Support 00:15:46.842 NVMe-MI Commands & Effects Log Page: May Support 00:15:46.842 Data Area 4 for Telemetry Log: Not Supported 00:15:46.842 Error Log Page Entries Supported: 1 00:15:46.842 Keep Alive: Not Supported 00:15:46.842 00:15:46.842 NVM Command Set Attributes 00:15:46.842 ========================== 00:15:46.842 Submission Queue Entry Size 00:15:46.842 Max: 1 00:15:46.842 Min: 1 00:15:46.842 Completion Queue Entry Size 00:15:46.843 Max: 1 00:15:46.843 Min: 1 00:15:46.843 Number of Namespaces: 0 00:15:46.843 Compare Command: Not Supported 00:15:46.843 Write Uncorrectable Command: Not Supported 00:15:46.843 Dataset Management Command: Not Supported 00:15:46.843 Write Zeroes Command: Not Supported 00:15:46.843 Set Features Save Field: Not Supported 00:15:46.843 Reservations: Not Supported 00:15:46.843 Timestamp: Not Supported 00:15:46.843 Copy: Not Supported 00:15:46.843 Volatile Write Cache: Not Present 00:15:46.843 Atomic Write Unit (Normal): 1 00:15:46.843 Atomic Write Unit (PFail): 1 00:15:46.843 Atomic Compare & Write Unit: 1 00:15:46.843 Fused Compare & Write: Not Supported 00:15:46.843 Scatter-Gather List 00:15:46.843 SGL Command Set: Supported 00:15:46.843 SGL Keyed: Not Supported 00:15:46.843 SGL Bit Bucket Descriptor: Not Supported 00:15:46.843 SGL Metadata Pointer: Not Supported 00:15:46.843 Oversized SGL: Not Supported 00:15:46.843 SGL Metadata Address: Not Supported 00:15:46.843 SGL Offset: Supported 00:15:46.843 Transport SGL Data Block: Not Supported 00:15:46.843 Replay Protected Memory Block: Not Supported 00:15:46.843 00:15:46.843 Firmware Slot Information 00:15:46.843 ========================= 00:15:46.843 Active slot: 0 00:15:46.843 00:15:46.843 00:15:46.843 Error Log 
00:15:46.843 ========= 00:15:46.843 00:15:46.843 Active Namespaces 00:15:46.843 ================= 00:15:46.843 Discovery Log Page 00:15:46.843 ================== 00:15:46.843 Generation Counter: 2 00:15:46.843 Number of Records: 2 00:15:46.843 Record Format: 0 00:15:46.843 00:15:46.843 Discovery Log Entry 0 00:15:46.843 ---------------------- 00:15:46.843 Transport Type: 3 (TCP) 00:15:46.843 Address Family: 1 (IPv4) 00:15:46.843 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:46.843 Entry Flags: 00:15:46.843 Duplicate Returned Information: 0 00:15:46.843 Explicit Persistent Connection Support for Discovery: 0 00:15:46.843 Transport Requirements: 00:15:46.843 Secure Channel: Not Specified 00:15:46.843 Port ID: 1 (0x0001) 00:15:46.843 Controller ID: 65535 (0xffff) 00:15:46.843 Admin Max SQ Size: 32 00:15:46.843 Transport Service Identifier: 4420 00:15:46.843 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:46.843 Transport Address: 10.0.0.1 00:15:46.843 Discovery Log Entry 1 00:15:46.843 ---------------------- 00:15:46.843 Transport Type: 3 (TCP) 00:15:46.843 Address Family: 1 (IPv4) 00:15:46.843 Subsystem Type: 2 (NVM Subsystem) 00:15:46.843 Entry Flags: 00:15:46.843 Duplicate Returned Information: 0 00:15:46.843 Explicit Persistent Connection Support for Discovery: 0 00:15:46.843 Transport Requirements: 00:15:46.843 Secure Channel: Not Specified 00:15:46.843 Port ID: 1 (0x0001) 00:15:46.843 Controller ID: 65535 (0xffff) 00:15:46.843 Admin Max SQ Size: 32 00:15:46.843 Transport Service Identifier: 4420 00:15:46.843 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:15:46.843 Transport Address: 10.0.0.1 00:15:46.843 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:15:46.843 get_feature(0x01) failed 00:15:46.843 get_feature(0x02) failed 00:15:46.843 get_feature(0x04) failed 00:15:46.843 ===================================================== 00:15:46.843 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:15:46.843 ===================================================== 00:15:46.843 Controller Capabilities/Features 00:15:46.843 ================================ 00:15:46.843 Vendor ID: 0000 00:15:46.843 Subsystem Vendor ID: 0000 00:15:46.843 Serial Number: a9dee174438261dbbaec 00:15:46.843 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:15:46.843 Firmware Version: 6.8.9-20 00:15:46.843 Recommended Arb Burst: 6 00:15:46.843 IEEE OUI Identifier: 00 00 00 00:15:46.843 Multi-path I/O 00:15:46.843 May have multiple subsystem ports: Yes 00:15:46.843 May have multiple controllers: Yes 00:15:46.843 Associated with SR-IOV VF: No 00:15:46.843 Max Data Transfer Size: Unlimited 00:15:46.843 Max Number of Namespaces: 1024 00:15:46.843 Max Number of I/O Queues: 128 00:15:46.843 NVMe Specification Version (VS): 1.3 00:15:46.843 NVMe Specification Version (Identify): 1.3 00:15:46.843 Maximum Queue Entries: 1024 00:15:46.843 Contiguous Queues Required: No 00:15:46.843 Arbitration Mechanisms Supported 00:15:46.843 Weighted Round Robin: Not Supported 00:15:46.843 Vendor Specific: Not Supported 00:15:46.843 Reset Timeout: 7500 ms 00:15:46.843 Doorbell Stride: 4 bytes 00:15:46.843 NVM Subsystem Reset: Not Supported 00:15:46.843 Command Sets Supported 00:15:46.843 NVM Command Set: Supported 00:15:46.843 Boot Partition: Not Supported 00:15:46.843 Memory 
Page Size Minimum: 4096 bytes 00:15:46.843 Memory Page Size Maximum: 4096 bytes 00:15:46.843 Persistent Memory Region: Not Supported 00:15:46.843 Optional Asynchronous Events Supported 00:15:46.843 Namespace Attribute Notices: Supported 00:15:46.843 Firmware Activation Notices: Not Supported 00:15:46.843 ANA Change Notices: Supported 00:15:46.843 PLE Aggregate Log Change Notices: Not Supported 00:15:46.843 LBA Status Info Alert Notices: Not Supported 00:15:46.843 EGE Aggregate Log Change Notices: Not Supported 00:15:46.843 Normal NVM Subsystem Shutdown event: Not Supported 00:15:46.843 Zone Descriptor Change Notices: Not Supported 00:15:46.843 Discovery Log Change Notices: Not Supported 00:15:46.843 Controller Attributes 00:15:46.843 128-bit Host Identifier: Supported 00:15:46.843 Non-Operational Permissive Mode: Not Supported 00:15:46.843 NVM Sets: Not Supported 00:15:46.843 Read Recovery Levels: Not Supported 00:15:46.843 Endurance Groups: Not Supported 00:15:46.843 Predictable Latency Mode: Not Supported 00:15:46.843 Traffic Based Keep ALive: Supported 00:15:46.843 Namespace Granularity: Not Supported 00:15:46.843 SQ Associations: Not Supported 00:15:46.843 UUID List: Not Supported 00:15:46.843 Multi-Domain Subsystem: Not Supported 00:15:46.843 Fixed Capacity Management: Not Supported 00:15:46.843 Variable Capacity Management: Not Supported 00:15:46.843 Delete Endurance Group: Not Supported 00:15:46.843 Delete NVM Set: Not Supported 00:15:46.843 Extended LBA Formats Supported: Not Supported 00:15:46.843 Flexible Data Placement Supported: Not Supported 00:15:46.843 00:15:46.843 Controller Memory Buffer Support 00:15:46.843 ================================ 00:15:46.843 Supported: No 00:15:46.843 00:15:46.843 Persistent Memory Region Support 00:15:46.843 ================================ 00:15:46.843 Supported: No 00:15:46.843 00:15:46.843 Admin Command Set Attributes 00:15:46.843 ============================ 00:15:46.843 Security Send/Receive: Not Supported 00:15:46.843 Format NVM: Not Supported 00:15:46.843 Firmware Activate/Download: Not Supported 00:15:46.843 Namespace Management: Not Supported 00:15:46.843 Device Self-Test: Not Supported 00:15:46.843 Directives: Not Supported 00:15:46.843 NVMe-MI: Not Supported 00:15:46.843 Virtualization Management: Not Supported 00:15:46.843 Doorbell Buffer Config: Not Supported 00:15:46.843 Get LBA Status Capability: Not Supported 00:15:46.843 Command & Feature Lockdown Capability: Not Supported 00:15:46.843 Abort Command Limit: 4 00:15:46.843 Async Event Request Limit: 4 00:15:46.843 Number of Firmware Slots: N/A 00:15:46.843 Firmware Slot 1 Read-Only: N/A 00:15:46.843 Firmware Activation Without Reset: N/A 00:15:46.843 Multiple Update Detection Support: N/A 00:15:46.843 Firmware Update Granularity: No Information Provided 00:15:46.843 Per-Namespace SMART Log: Yes 00:15:46.843 Asymmetric Namespace Access Log Page: Supported 00:15:46.843 ANA Transition Time : 10 sec 00:15:46.843 00:15:46.843 Asymmetric Namespace Access Capabilities 00:15:46.843 ANA Optimized State : Supported 00:15:46.843 ANA Non-Optimized State : Supported 00:15:46.843 ANA Inaccessible State : Supported 00:15:46.843 ANA Persistent Loss State : Supported 00:15:46.843 ANA Change State : Supported 00:15:46.843 ANAGRPID is not changed : No 00:15:46.843 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:15:46.843 00:15:46.843 ANA Group Identifier Maximum : 128 00:15:46.843 Number of ANA Group Identifiers : 128 00:15:46.843 Max Number of Allowed Namespaces : 1024 00:15:46.843 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:15:46.843 Command Effects Log Page: Supported 00:15:46.843 Get Log Page Extended Data: Supported 00:15:46.843 Telemetry Log Pages: Not Supported 00:15:46.843 Persistent Event Log Pages: Not Supported 00:15:46.843 Supported Log Pages Log Page: May Support 00:15:46.843 Commands Supported & Effects Log Page: Not Supported 00:15:46.843 Feature Identifiers & Effects Log Page:May Support 00:15:46.843 NVMe-MI Commands & Effects Log Page: May Support 00:15:46.843 Data Area 4 for Telemetry Log: Not Supported 00:15:46.843 Error Log Page Entries Supported: 128 00:15:46.843 Keep Alive: Supported 00:15:46.844 Keep Alive Granularity: 1000 ms 00:15:46.844 00:15:46.844 NVM Command Set Attributes 00:15:46.844 ========================== 00:15:46.844 Submission Queue Entry Size 00:15:46.844 Max: 64 00:15:46.844 Min: 64 00:15:46.844 Completion Queue Entry Size 00:15:46.844 Max: 16 00:15:46.844 Min: 16 00:15:46.844 Number of Namespaces: 1024 00:15:46.844 Compare Command: Not Supported 00:15:46.844 Write Uncorrectable Command: Not Supported 00:15:46.844 Dataset Management Command: Supported 00:15:46.844 Write Zeroes Command: Supported 00:15:46.844 Set Features Save Field: Not Supported 00:15:46.844 Reservations: Not Supported 00:15:46.844 Timestamp: Not Supported 00:15:46.844 Copy: Not Supported 00:15:46.844 Volatile Write Cache: Present 00:15:46.844 Atomic Write Unit (Normal): 1 00:15:46.844 Atomic Write Unit (PFail): 1 00:15:46.844 Atomic Compare & Write Unit: 1 00:15:46.844 Fused Compare & Write: Not Supported 00:15:46.844 Scatter-Gather List 00:15:46.844 SGL Command Set: Supported 00:15:46.844 SGL Keyed: Not Supported 00:15:46.844 SGL Bit Bucket Descriptor: Not Supported 00:15:46.844 SGL Metadata Pointer: Not Supported 00:15:46.844 Oversized SGL: Not Supported 00:15:46.844 SGL Metadata Address: Not Supported 00:15:46.844 SGL Offset: Supported 00:15:46.844 Transport SGL Data Block: Not Supported 00:15:46.844 Replay Protected Memory Block: Not Supported 00:15:46.844 00:15:46.844 Firmware Slot Information 00:15:46.844 ========================= 00:15:46.844 Active slot: 0 00:15:46.844 00:15:46.844 Asymmetric Namespace Access 00:15:46.844 =========================== 00:15:46.844 Change Count : 0 00:15:46.844 Number of ANA Group Descriptors : 1 00:15:46.844 ANA Group Descriptor : 0 00:15:46.844 ANA Group ID : 1 00:15:46.844 Number of NSID Values : 1 00:15:46.844 Change Count : 0 00:15:46.844 ANA State : 1 00:15:46.844 Namespace Identifier : 1 00:15:46.844 00:15:46.844 Commands Supported and Effects 00:15:46.844 ============================== 00:15:46.844 Admin Commands 00:15:46.844 -------------- 00:15:46.844 Get Log Page (02h): Supported 00:15:46.844 Identify (06h): Supported 00:15:46.844 Abort (08h): Supported 00:15:46.844 Set Features (09h): Supported 00:15:46.844 Get Features (0Ah): Supported 00:15:46.844 Asynchronous Event Request (0Ch): Supported 00:15:46.844 Keep Alive (18h): Supported 00:15:46.844 I/O Commands 00:15:46.844 ------------ 00:15:46.844 Flush (00h): Supported 00:15:46.844 Write (01h): Supported LBA-Change 00:15:46.844 Read (02h): Supported 00:15:46.844 Write Zeroes (08h): Supported LBA-Change 00:15:46.844 Dataset Management (09h): Supported 00:15:46.844 00:15:46.844 Error Log 00:15:46.844 ========= 00:15:46.844 Entry: 0 00:15:46.844 Error Count: 0x3 00:15:46.844 Submission Queue Id: 0x0 00:15:46.844 Command Id: 0x5 00:15:46.844 Phase Bit: 0 00:15:46.844 Status Code: 0x2 00:15:46.844 Status Code Type: 0x0 00:15:46.844 Do Not Retry: 1 00:15:46.844 Error 
Location: 0x28 00:15:46.844 LBA: 0x0 00:15:46.844 Namespace: 0x0 00:15:46.844 Vendor Log Page: 0x0 00:15:46.844 ----------- 00:15:46.844 Entry: 1 00:15:46.844 Error Count: 0x2 00:15:46.844 Submission Queue Id: 0x0 00:15:46.844 Command Id: 0x5 00:15:46.844 Phase Bit: 0 00:15:46.844 Status Code: 0x2 00:15:46.844 Status Code Type: 0x0 00:15:46.844 Do Not Retry: 1 00:15:46.844 Error Location: 0x28 00:15:46.844 LBA: 0x0 00:15:46.844 Namespace: 0x0 00:15:46.844 Vendor Log Page: 0x0 00:15:46.844 ----------- 00:15:46.844 Entry: 2 00:15:46.844 Error Count: 0x1 00:15:46.844 Submission Queue Id: 0x0 00:15:46.844 Command Id: 0x4 00:15:46.844 Phase Bit: 0 00:15:46.844 Status Code: 0x2 00:15:46.844 Status Code Type: 0x0 00:15:46.844 Do Not Retry: 1 00:15:46.844 Error Location: 0x28 00:15:46.844 LBA: 0x0 00:15:46.844 Namespace: 0x0 00:15:46.844 Vendor Log Page: 0x0 00:15:46.844 00:15:46.844 Number of Queues 00:15:46.844 ================ 00:15:46.844 Number of I/O Submission Queues: 128 00:15:46.844 Number of I/O Completion Queues: 128 00:15:46.844 00:15:46.844 ZNS Specific Controller Data 00:15:46.844 ============================ 00:15:46.844 Zone Append Size Limit: 0 00:15:46.844 00:15:46.844 00:15:46.844 Active Namespaces 00:15:46.844 ================= 00:15:46.844 get_feature(0x05) failed 00:15:46.844 Namespace ID:1 00:15:46.844 Command Set Identifier: NVM (00h) 00:15:46.844 Deallocate: Supported 00:15:46.844 Deallocated/Unwritten Error: Not Supported 00:15:46.844 Deallocated Read Value: Unknown 00:15:46.844 Deallocate in Write Zeroes: Not Supported 00:15:46.844 Deallocated Guard Field: 0xFFFF 00:15:46.844 Flush: Supported 00:15:46.844 Reservation: Not Supported 00:15:46.844 Namespace Sharing Capabilities: Multiple Controllers 00:15:46.844 Size (in LBAs): 1310720 (5GiB) 00:15:46.844 Capacity (in LBAs): 1310720 (5GiB) 00:15:46.844 Utilization (in LBAs): 1310720 (5GiB) 00:15:46.844 UUID: 419dd7f1-8825-474f-b5ff-1a58928ca08e 00:15:46.844 Thin Provisioning: Not Supported 00:15:46.844 Per-NS Atomic Units: Yes 00:15:46.844 Atomic Boundary Size (Normal): 0 00:15:46.844 Atomic Boundary Size (PFail): 0 00:15:46.844 Atomic Boundary Offset: 0 00:15:46.844 NGUID/EUI64 Never Reused: No 00:15:46.844 ANA group ID: 1 00:15:46.844 Namespace Write Protected: No 00:15:46.844 Number of LBA Formats: 1 00:15:46.844 Current LBA Format: LBA Format #00 00:15:46.844 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:15:46.844 00:15:46.844 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:15:46.844 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:46.844 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:47.104 rmmod nvme_tcp 00:15:47.104 rmmod nvme_fabrics 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:15:47.104 12:23:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.104 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.364 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:15:47.364 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:15:47.364 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:15:47.364 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:15:47.364 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:47.364 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:47.364 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:47.364 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:47.364 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:15:47.364 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:15:47.364 12:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:47.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:48.192 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:48.192 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:48.192 00:15:48.192 real 0m3.265s 00:15:48.192 user 0m1.157s 00:15:48.192 sys 0m1.501s 00:15:48.192 12:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.192 12:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.192 ************************************ 00:15:48.192 END TEST nvmf_identify_kernel_target 00:15:48.192 ************************************ 00:15:48.192 12:23:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:48.192 12:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.192 12:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.192 12:23:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.192 ************************************ 00:15:48.192 START TEST nvmf_auth_host 00:15:48.192 ************************************ 00:15:48.192 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:48.453 * Looking for test storage... 
00:15:48.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:48.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.453 --rc genhtml_branch_coverage=1 00:15:48.453 --rc genhtml_function_coverage=1 00:15:48.453 --rc genhtml_legend=1 00:15:48.453 --rc geninfo_all_blocks=1 00:15:48.453 --rc geninfo_unexecuted_blocks=1 00:15:48.453 00:15:48.453 ' 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:48.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.453 --rc genhtml_branch_coverage=1 00:15:48.453 --rc genhtml_function_coverage=1 00:15:48.453 --rc genhtml_legend=1 00:15:48.453 --rc geninfo_all_blocks=1 00:15:48.453 --rc geninfo_unexecuted_blocks=1 00:15:48.453 00:15:48.453 ' 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:48.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.453 --rc genhtml_branch_coverage=1 00:15:48.453 --rc genhtml_function_coverage=1 00:15:48.453 --rc genhtml_legend=1 00:15:48.453 --rc geninfo_all_blocks=1 00:15:48.453 --rc geninfo_unexecuted_blocks=1 00:15:48.453 00:15:48.453 ' 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:48.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.453 --rc genhtml_branch_coverage=1 00:15:48.453 --rc genhtml_function_coverage=1 00:15:48.453 --rc genhtml_legend=1 00:15:48.453 --rc geninfo_all_blocks=1 00:15:48.453 --rc geninfo_unexecuted_blocks=1 00:15:48.453 00:15:48.453 ' 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.453 12:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.453 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.454 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:48.454 Cannot find device "nvmf_init_br" 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:48.454 Cannot find device "nvmf_init_br2" 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:48.454 Cannot find device "nvmf_tgt_br" 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.454 Cannot find device "nvmf_tgt_br2" 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:48.454 Cannot find device "nvmf_init_br" 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:48.454 Cannot find device "nvmf_init_br2" 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:15:48.454 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:48.713 Cannot find device "nvmf_tgt_br" 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:48.713 Cannot find device "nvmf_tgt_br2" 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:48.713 Cannot find device "nvmf_br" 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:48.713 Cannot find device "nvmf_init_if" 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:48.713 Cannot find device "nvmf_init_if2" 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.713 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.713 12:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.713 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:48.713 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
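The ip commands traced above are nvmf_veth_init from nvmf/common.sh building the virtual test network: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.1/10.0.0.2 on the host and 10.0.0.3/10.0.0.4 inside the namespace, and all peer ends joined by the nvmf_br bridge. The earlier "Cannot find device ..." messages come from the teardown pass that runs before setup and are expected on a fresh host. A minimal sketch of the same steps, reduced to a single initiator/target pair (names and addresses are taken from the trace; the second pair is created the same way):

  ip netns add nvmf_tgt_ns_spdk                                  # target-side namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address (host side)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge the peer ends together
  ip link set nvmf_tgt_br master nvmf_br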
00:15:48.972 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.972 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.972 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:48.972 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:48.972 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:48.972 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.972 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:48.972 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:48.972 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.972 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.169 ms 00:15:48.972 00:15:48.972 --- 10.0.0.3 ping statistics --- 00:15:48.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.972 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:15:48.972 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:48.972 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:48.972 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:15:48.972 00:15:48.972 --- 10.0.0.4 ping statistics --- 00:15:48.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.972 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:48.972 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:48.972 00:15:48.972 --- 10.0.0.1 ping statistics --- 00:15:48.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.973 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:48.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:48.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:15:48.973 00:15:48.973 --- 10.0.0.2 ping statistics --- 00:15:48.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.973 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=77685 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 77685 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 77685 ']' 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
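With connectivity across 10.0.0.1-10.0.0.4 confirmed by the pings, the host loads nvme-tcp and nvmfappstart launches the SPDK target inside the namespace with nvme_auth debug logging, then waits for its RPC socket. Schematically (binary path and flags are from the trace; waitforlisten is the autotest_common.sh helper that polls /var/tmp/spdk.sock and is shown here only as a call; how the PID is captured is not visible in the trace):

  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!                      # 77685 in this run
  waitforlisten "$nvmfpid"        # blocks until the target answers on /var/tmp/spdk.sock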
00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.973 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.231 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=53441eb7ea43116fc372e8b501692db7 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RHQ 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 53441eb7ea43116fc372e8b501692db7 0 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 53441eb7ea43116fc372e8b501692db7 0 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=53441eb7ea43116fc372e8b501692db7 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:49.232 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RHQ 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RHQ 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.RHQ 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.490 12:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e8c6c7e58e6d93639c224c7df433c3b8bbd2b3276f445400dfa3c779af97146c 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9ze 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e8c6c7e58e6d93639c224c7df433c3b8bbd2b3276f445400dfa3c779af97146c 3 00:15:49.490 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e8c6c7e58e6d93639c224c7df433c3b8bbd2b3276f445400dfa3c779af97146c 3 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e8c6c7e58e6d93639c224c7df433c3b8bbd2b3276f445400dfa3c779af97146c 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9ze 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9ze 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.9ze 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=722fa2f2f4f107042f44a03444b6eff81d410731cc46dd61 00:15:49.491 12:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.i8D 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 722fa2f2f4f107042f44a03444b6eff81d410731cc46dd61 0 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 722fa2f2f4f107042f44a03444b6eff81d410731cc46dd61 0 
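The keys[*]/ckeys[*] files being assembled here come from gen_dhchap_key in nvmf/common.sh: it draws the requested number of random bytes as a hex string with xxd, hands that string plus a digest index (null=0, sha256=1, sha384=2, sha512=3, per the digests map in the trace) to format_dhchap_key, and stores the result in a 0600 temp file under /tmp. A reduced sketch of the first key above; the DHHC-1 encoding itself is done by the inline Python snippet the trace only shows as "python -", and writing the formatted key into the temp file via a redirect is an assumption:

  key=$(xxd -p -c0 -l 16 /dev/urandom)        # 32 hex chars of key material (gen_dhchap_key null 32)
  file=$(mktemp -t spdk.key-null.XXX)         # /tmp/spdk.key-null.RHQ in this run
  format_dhchap_key "$key" 0 > "$file"        # digest 0 = null; emits the DHHC-1 secret (redirect assumed)
  chmod 0600 "$file"
  keys[0]=$file                               # consumed later by auth.sh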
00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=722fa2f2f4f107042f44a03444b6eff81d410731cc46dd61 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.i8D 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.i8D 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.i8D 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e55fd1338ed5c76be12e77551fb215cd46314a7d02fe71cb 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.HfH 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e55fd1338ed5c76be12e77551fb215cd46314a7d02fe71cb 2 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e55fd1338ed5c76be12e77551fb215cd46314a7d02fe71cb 2 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e55fd1338ed5c76be12e77551fb215cd46314a7d02fe71cb 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.HfH 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.HfH 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.HfH 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.491 12:23:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aee8330ab88bbd3ede660c706040ce16 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cW0 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aee8330ab88bbd3ede660c706040ce16 1 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aee8330ab88bbd3ede660c706040ce16 1 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aee8330ab88bbd3ede660c706040ce16 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:15:49.491 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cW0 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cW0 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.cW0 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a78c2bd916a29a45bed06e057d1e407a 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.R28 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a78c2bd916a29a45bed06e057d1e407a 1 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a78c2bd916a29a45bed06e057d1e407a 1 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=a78c2bd916a29a45bed06e057d1e407a 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.R28 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.R28 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.R28 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7caa73b8ce878eb397654a6ceb211a826015b00a3c0e3a5b 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.H5P 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7caa73b8ce878eb397654a6ceb211a826015b00a3c0e3a5b 2 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7caa73b8ce878eb397654a6ceb211a826015b00a3c0e3a5b 2 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7caa73b8ce878eb397654a6ceb211a826015b00a3c0e3a5b 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.H5P 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.H5P 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.H5P 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:49.750 12:23:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6003ca89ae8eb0f5507d57fdf3aef4b3 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Paa 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6003ca89ae8eb0f5507d57fdf3aef4b3 0 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6003ca89ae8eb0f5507d57fdf3aef4b3 0 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6003ca89ae8eb0f5507d57fdf3aef4b3 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Paa 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Paa 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Paa 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0d3a14a10eb7ec39060350f18fc68fc9d1729f447b561829fbbdc454eaf26486 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jNF 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0d3a14a10eb7ec39060350f18fc68fc9d1729f447b561829fbbdc454eaf26486 3 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0d3a14a10eb7ec39060350f18fc68fc9d1729f447b561829fbbdc454eaf26486 3 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0d3a14a10eb7ec39060350f18fc68fc9d1729f447b561829fbbdc454eaf26486 00:15:49.750 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:15:49.751 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:15:50.009 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jNF 00:15:50.009 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jNF 00:15:50.009 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.jNF 00:15:50.009 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:15:50.009 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77685 00:15:50.009 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 77685 ']' 00:15:50.009 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.009 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.009 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.009 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.009 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RHQ 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.9ze ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9ze 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.i8D 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.HfH ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.HfH 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.cW0 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.R28 ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.R28 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.H5P 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Paa ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Paa 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.jNF 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:50.268 12:23:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:15:50.268 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:15:50.269 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:50.269 12:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:50.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.836 Waiting for block devices as requested 00:15:50.836 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:50.836 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:51.405 No valid GPT data, bailing 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:15:51.405 12:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:51.405 No valid GPT data, bailing 00:15:51.405 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:51.664 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:51.665 No valid GPT data, bailing 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:51.665 No valid GPT data, bailing 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid=539e2455-b2a8-46ce-bfce-40a317783b05 -a 10.0.0.1 -t tcp -s 4420 00:15:51.665 00:15:51.665 Discovery Log Number of Records 2, Generation counter 2 00:15:51.665 =====Discovery Log Entry 0====== 00:15:51.665 trtype: tcp 00:15:51.665 adrfam: ipv4 00:15:51.665 subtype: current discovery subsystem 00:15:51.665 treq: not specified, sq flow control disable supported 00:15:51.665 portid: 1 00:15:51.665 trsvcid: 4420 00:15:51.665 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:51.665 traddr: 10.0.0.1 00:15:51.665 eflags: none 00:15:51.665 sectype: none 00:15:51.665 =====Discovery Log Entry 1====== 00:15:51.665 trtype: tcp 00:15:51.665 adrfam: ipv4 00:15:51.665 subtype: nvme subsystem 00:15:51.665 treq: not specified, sq flow control disable supported 00:15:51.665 portid: 1 00:15:51.665 trsvcid: 4420 00:15:51.665 subnqn: nqn.2024-02.io.spdk:cnode0 00:15:51.665 traddr: 10.0.0.1 00:15:51.665 eflags: none 00:15:51.665 sectype: none 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:51.665 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.925 nvme0n1 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.925 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.183 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.184 nvme0n1 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.184 
12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:52.184 12:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.184 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.443 nvme0n1 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:15:52.443 12:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:52.443 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.444 12:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.444 nvme0n1 00:15:52.444 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.444 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.444 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.444 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.444 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.444 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:52.703 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.704 12:23:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.704 nvme0n1 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:52.704 
12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.704 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
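The trace above repeats one DH-HMAC-CHAP cycle per digest/dhgroup/keyid combination: restrict the host to a single digest and DH group, attach a controller with that keyid's secret (plus the controller key when one is defined), confirm the controller shows up, then detach it before the next iteration. A minimal sketch of that RPC sequence, under the assumption that SPDK's scripts/rpc.py is invoked directly rather than through the test's rpc_cmd wrapper, and where key2/ckey2 stand for key names registered earlier in the test setup (not shown in this excerpt):

    # limit host-side DH-HMAC-CHAP to hmac(sha256) and ffdhe2048 (assumed direct rpc.py call)
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # connect to the target, authenticating with the host key (and optional controller key) for this keyid
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # verify the controller attached, then tear it down before the next digest/dhgroup/keyid iteration
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
    scripts/rpc.py bdev_nvme_detach_controller nvme0
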
00:15:52.964 nvme0n1 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:52.964 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:53.223 12:23:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.223 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.482 nvme0n1 00:15:53.482 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.482 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.482 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.482 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.482 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.482 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.482 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.483 12:23:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:53.483 12:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.483 12:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.483 nvme0n1 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.483 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.741 nvme0n1 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.741 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.000 nvme0n1 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.000 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.258 nvme0n1 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:54.258 12:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.825 12:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.825 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.083 nvme0n1 00:15:55.083 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.083 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:55.083 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:55.083 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.083 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.083 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.084 12:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.084 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.342 nvme0n1 00:15:55.342 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.343 12:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.602 nvme0n1 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.602 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.861 nvme0n1 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:55.861 12:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:55.861 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:55.862 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:55.862 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.862 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.119 nvme0n1 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:56.119 12:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.495 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.063 nvme0n1 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:58.063 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.064 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.064 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:58.064 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.064 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:58.064 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:58.064 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:58.064 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.064 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.064 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.324 nvme0n1 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:58.324 12:23:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.324 12:23:44 
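Each iteration then verifies and cleans up the same way: list the attached controllers, confirm that exactly nvme0 is present (meaning the attach, and therefore the authentication, succeeded), and detach it before moving to the next key index. Condensed from the commands visible in the trace:

    # Verify the authenticated controller came up, then tear it down.
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0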
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.324 12:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.583 nvme0n1 00:15:58.583 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.583 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:58.583 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:58.583 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.584 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.584 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:58.842 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.842 
12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.100 nvme0n1 00:15:59.100 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.100 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:59.100 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.100 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:59.100 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.100 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.100 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.100 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:59.100 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.101 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.359 nvme0n1 00:15:59.359 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.359 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:59.359 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.359 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:59.359 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.359 12:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:59.636 12:23:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:59.636 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.637 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.911 nvme0n1 00:15:59.911 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.911 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:59.911 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:59.911 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.911 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
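The get_main_ns_ip block that repeats before every attach resolves which address the initiator should dial: it maps each transport to the environment variable holding the right IP and prints that variable's value, 10.0.0.1 for tcp in this run. A rough reconstruction from the xtrace output rather than the exact nvmf/common.sh source (the transport variable name TEST_TRANSPORT is an assumption, not shown in the trace):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # For tcp this selects NVMF_INITIATOR_IP, which the test environment
        # sets to 10.0.0.1 in this run.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        echo "${!ip}"
    }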
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.171 12:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.740 nvme0n1 00:16:00.740 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.740 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:00.740 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:00.740 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.740 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.740 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.740 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.740 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:00.741 
12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.741 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.310 nvme0n1 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.310 12:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.879 nvme0n1 00:16:01.879 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.879 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:01.879 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.879 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:01.879 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.879 12:23:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:01.880 12:23:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.880 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.448 nvme0n1 00:16:02.448 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.448 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.448 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:02.448 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.448 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.448 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.448 12:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.448 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.449 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:02.708 nvme0n1 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.709 nvme0n1 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:02.709 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:02.969 
12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.969 nvme0n1 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:02.969 
12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:02.969 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.970 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.229 nvme0n1 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.229 nvme0n1 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.229 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.489 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.490 12:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.490 nvme0n1 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.490 
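[editor's note] The host/auth.sh@100-104 markers in the trace correspond to the nested loop that drives these repetitions. A rough sketch of that loop, assuming the digests, dhgroups and keys/ckeys arrays are populated earlier in the script (structure inferred from the trace, not copied from it):

    # Rough sketch of the loop traced at host/auth.sh@100-104.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
            done
        done
    done
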
12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.490 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:03.749 12:23:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.749 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.750 nvme0n1 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:03.750 12:23:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.750 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.008 nvme0n1 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.008 12:23:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.008 nvme0n1 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.008 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:04.267 
12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
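[editor's note] The repeated nvmf/common.sh@769-783 lines show how the initiator address is chosen before each attach. A sketch of that selection is below; the function body and the TEST_TRANSPORT variable name are inferred from the trace and may differ from the real helper, but NVMF_FIRST_TARGET_IP/NVMF_INITIATOR_IP and the 10.0.0.1 result match what the log shows.

    # Sketch of the address selection traced at nvmf/common.sh@769-783 (assumed shape).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # For a tcp transport this selects NVMF_INITIATOR_IP, which resolves
        # to 10.0.0.1 in this run.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        echo "${!ip}"
    }
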
00:16:04.267 nvme0n1 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.267 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:04.268 12:23:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.268 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.527 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.527 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.527 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.527 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.527 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.527 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.527 12:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.527 nvme0n1 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.527 12:23:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.527 12:23:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.527 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.786 nvme0n1 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.786 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.045 nvme0n1 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.045 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.305 nvme0n1 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.305 12:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.565 nvme0n1 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
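Each of the stanzas above is one pass of the same host-side cycle, driven by connect_authenticate in host/auth.sh. A minimal sketch of a single pass, using only RPCs that appear in this trace (the DH-HMAC-CHAP keys key0/ckey0 are assumed to have been registered with the host earlier in the test, outside this excerpt):

    # limit the host to the digest/DH-group pair under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    # connect with in-band authentication: key0 authenticates the host,
    # ckey0 enables bidirectional (controller) authentication
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # confirm the controller came up, then drop it before the next key is tried
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The recurring nvme0n1 markers are the namespace bdev of each freshly attached controller; keyid 4 is the exception in that it has no controller key, so its attach uses --dhchap-key key4 alone.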
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.565 12:23:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.565 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.825 nvme0n1 00:16:05.825 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.825 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.825 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.825 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.825 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:06.084 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:06.085 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:06.085 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.085 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.085 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:06.085 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.085 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:06.085 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:06.085 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:06.085 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.085 12:23:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.085 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.344 nvme0n1 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:06.344 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.345 12:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.604 nvme0n1 00:16:06.604 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.604 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.604 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:06.604 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.604 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.604 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.864 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.123 nvme0n1 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:07.123 12:23:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.123 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.124 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.382 nvme0n1 00:16:07.382 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.382 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.382 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.382 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.382 12:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:07.382 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
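At this point the trace moves from ffdhe6144 to ffdhe8192: host/auth.sh walks every key against every DH group with two nested loops (the auth.sh@101 to @104 lines above). Roughly, with the digest fixed at sha384 throughout this stretch of the log:

    # iteration structure as it appears in the trace (variable names from auth.sh)
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do       # 0..4
            # program the same secret on the target side,
            nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"
            # then authenticate from the host side and disconnect again
            connect_authenticate "sha384" "$dhgroup" "$keyid"
        done
    done

nvmet_auth_set_key and connect_authenticate are helpers local to the test; their bodies are what produce the key=DHHC-1:... and rpc_cmd lines seen throughout this section.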
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.641 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.211 nvme0n1 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.211 12:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.780 nvme0n1 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.780 12:23:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.780 12:23:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.780 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.345 nvme0n1 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:09.345 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:09.346 12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.346 
12:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.912 nvme0n1 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.912 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.480 nvme0n1 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:10.480 12:23:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:10.480 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:10.480 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.480 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:10.480 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:10.480 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:10.480 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.480 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.480 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.480 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:10.481 12:23:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.481 nvme0n1 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.481 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:10.740 12:23:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.740 nvme0n1 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.740 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.741 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.999 nvme0n1 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.999 nvme0n1 00:16:10.999 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.257 nvme0n1 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.257 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.258 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.516 12:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.516 nvme0n1 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.516 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.774 nvme0n1 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:11.774 
12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:11.774 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.775 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.033 nvme0n1 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:16:12.033 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.033 
12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.034 nvme0n1 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.034 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.293 nvme0n1 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.293 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.552 nvme0n1 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.552 
12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.552 12:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.552 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.812 nvme0n1 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:12.812 12:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.812 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.071 nvme0n1 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.071 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.072 12:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.072 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.332 nvme0n1 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:13.332 
12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:13.332 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:13.333 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:13.333 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.333 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
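The stray nvme0n1 lines interleaved in the trace are the controller's namespace surfacing after each successful pass of connect_authenticate <digest> <dhgroup> <keyid>, the host-side counterpart of the key setup above: restrict the initiator to the digest/DH group under test, attach a controller with the matching key pair, confirm the controller registered as nvme0, then detach before the next key index. The condensed reconstruction below reuses only the RPCs visible in the trace; rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py, and key${keyid}/ckey${keyid} are assumed to be key names registered earlier in the script.

# Condensed host-side cycle, rebuilt from the RPCs recorded above.
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3

	# Offer only the digest/DH group under test to the initiator.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"

	# keyid 4 has no controller key (ckey is empty), so --dhchap-ctrlr-key is omitted there.
	local ctrlr_key=()
	[[ -z ${ckeys[keyid]} ]] || ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")

	# 10.0.0.1 is NVMF_INITIATOR_IP, chosen by get_main_ns_ip because the transport is tcp.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ctrlr_key[@]}"

	# DH-HMAC-CHAP succeeded iff the controller actually came up under the expected name.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

	rpc_cmd bdev_nvme_detach_controller nvme0
}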
00:16:13.592 nvme0n1 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:13.592 12:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.592 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.159 nvme0n1 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.159 12:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:14.159 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.160 12:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.160 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.419 nvme0n1 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.419 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.676 nvme0n1 00:16:14.676 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.676 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.676 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.676 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.676 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.676 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.934 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.191 nvme0n1 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:15.191 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.192 12:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.449 nvme0n1 00:16:15.449 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.449 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.449 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.449 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.449 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.449 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.706 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.706 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.706 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0NDFlYjdlYTQzMTE2ZmMzNzJlOGI1MDE2OTJkYjf7shs8: 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: ]] 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZThjNmM3ZTU4ZTZkOTM2MzljMjI0YzdkZjQzM2MzYjhiYmQyYjMyNzZmNDQ1NDAwZGZhM2M3NzlhZjk3MTQ2Y6HJHgM=: 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.707 12:24:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.707 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.271 nvme0n1 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.271 12:24:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.271 12:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.837 nvme0n1 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.837 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.405 nvme0n1 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NhYTczYjhjZTg3OGViMzk3NjU0YTZjZWIyMTFhODI2MDE1YjAwYTNjMGUzYTViOwqUCA==: 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: ]] 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAwM2NhODlhZThlYjBmNTUwN2Q1N2ZkZjNhZWY0YjN+wjVa: 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.405 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.973 nvme0n1 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQzYTE0YTEwZWI3ZWMzOTA2MDM1MGYxOGZjNjhmYzlkMTcyOWY0NDdiNTYxODI5ZmJiZGM0NTRlYWYyNjQ4Nrk5NZ0=: 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:17.973 12:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.973 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.541 nvme0n1 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.541 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.800 request: 00:16:18.800 { 00:16:18.800 "name": "nvme0", 00:16:18.800 "trtype": "tcp", 00:16:18.800 "traddr": "10.0.0.1", 00:16:18.800 "adrfam": "ipv4", 00:16:18.800 "trsvcid": "4420", 00:16:18.800 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:18.800 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:18.800 "prchk_reftag": false, 00:16:18.800 "prchk_guard": false, 00:16:18.800 "hdgst": false, 00:16:18.800 "ddgst": false, 00:16:18.800 "allow_unrecognized_csi": false, 00:16:18.800 "method": "bdev_nvme_attach_controller", 00:16:18.800 "req_id": 1 00:16:18.800 } 00:16:18.800 Got JSON-RPC error response 00:16:18.800 response: 00:16:18.800 { 00:16:18.800 "code": -5, 00:16:18.800 "message": "Input/output error" 00:16:18.800 } 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.800 request: 00:16:18.800 { 00:16:18.800 "name": "nvme0", 00:16:18.800 "trtype": "tcp", 00:16:18.800 "traddr": "10.0.0.1", 00:16:18.800 "adrfam": "ipv4", 00:16:18.800 "trsvcid": "4420", 00:16:18.800 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:18.800 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:18.800 "prchk_reftag": false, 00:16:18.800 "prchk_guard": false, 00:16:18.800 "hdgst": false, 00:16:18.800 "ddgst": false, 00:16:18.800 "dhchap_key": "key2", 00:16:18.800 "allow_unrecognized_csi": false, 00:16:18.800 "method": "bdev_nvme_attach_controller", 00:16:18.800 "req_id": 1 00:16:18.800 } 00:16:18.800 Got JSON-RPC error response 00:16:18.800 response: 00:16:18.800 { 00:16:18.800 "code": -5, 00:16:18.800 "message": "Input/output error" 00:16:18.800 } 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:18.800 12:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:18.800 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.801 request: 00:16:18.801 { 00:16:18.801 "name": "nvme0", 00:16:18.801 "trtype": "tcp", 00:16:18.801 "traddr": "10.0.0.1", 00:16:18.801 "adrfam": "ipv4", 00:16:18.801 "trsvcid": "4420", 
00:16:18.801 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:18.801 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:18.801 "prchk_reftag": false, 00:16:18.801 "prchk_guard": false, 00:16:18.801 "hdgst": false, 00:16:18.801 "ddgst": false, 00:16:18.801 "dhchap_key": "key1", 00:16:18.801 "dhchap_ctrlr_key": "ckey2", 00:16:18.801 "allow_unrecognized_csi": false, 00:16:18.801 "method": "bdev_nvme_attach_controller", 00:16:18.801 "req_id": 1 00:16:18.801 } 00:16:18.801 Got JSON-RPC error response 00:16:18.801 response: 00:16:18.801 { 00:16:18.801 "code": -5, 00:16:18.801 "message": "Input/output error" 00:16:18.801 } 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.801 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.058 nvme0n1 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.058 request: 00:16:19.058 { 00:16:19.058 "name": "nvme0", 00:16:19.058 "dhchap_key": "key1", 00:16:19.058 "dhchap_ctrlr_key": "ckey2", 00:16:19.058 "method": "bdev_nvme_set_keys", 00:16:19.058 "req_id": 1 00:16:19.058 } 00:16:19.058 Got JSON-RPC error response 00:16:19.058 response: 00:16:19.058 
{ 00:16:19.058 "code": -13, 00:16:19.058 "message": "Permission denied" 00:16:19.058 } 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:16:19.058 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZmEyZjJmNGYxMDcwNDJmNDRhMDM0NDRiNmVmZjgxZDQxMDczMWNjNDZkZDYx26K1wg==: 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: ]] 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1ZmQxMzM4ZWQ1Yzc2YmUxMmU3NzU1MWZiMjE1Y2Q0NjMxNGE3ZDAyZmU3MWNiHOs2Fg==: 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.431 nvme0n1 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVlODMzMGFiODhiYmQzZWRlNjYwYzcwNjA0MGNlMTaKuyFr: 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: ]] 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc4YzJiZDkxNmEyOWE0NWJlZDA2ZTA1N2QxZTQwN2G5tIi7: 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.431 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.431 request: 00:16:20.431 { 00:16:20.431 "name": "nvme0", 00:16:20.431 "dhchap_key": "key2", 00:16:20.431 "dhchap_ctrlr_key": "ckey1", 00:16:20.431 "method": "bdev_nvme_set_keys", 00:16:20.431 "req_id": 1 00:16:20.431 } 00:16:20.432 Got JSON-RPC error response 00:16:20.432 response: 00:16:20.432 { 00:16:20.432 "code": -13, 00:16:20.432 "message": "Permission denied" 00:16:20.432 } 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:16:20.432 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:16:21.369 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.369 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:16:21.369 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.369 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.369 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.369 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:16:21.369 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:16:21.369 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:16:21.369 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:16:21.369 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:16:21.369 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:16:21.369 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:21.369 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:16:21.369 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:21.369 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:21.369 rmmod nvme_tcp 00:16:21.628 rmmod nvme_fabrics 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 77685 ']' 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 77685 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 77685 ']' 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 77685 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77685 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.629 killing process with pid 77685 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77685' 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 77685 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 77685 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:21.629 12:24:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:21.629 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:21.888 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:21.889 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:21.889 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:21.889 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:22.827 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:22.827 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:16:22.827 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:22.827 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.RHQ /tmp/spdk.key-null.i8D /tmp/spdk.key-sha256.cW0 /tmp/spdk.key-sha384.H5P /tmp/spdk.key-sha512.jNF /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:16:22.827 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:23.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:23.418 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:23.418 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:23.418 00:16:23.418 real 0m35.019s 00:16:23.418 user 0m32.608s 00:16:23.418 sys 0m3.809s 00:16:23.418 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.418 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.418 ************************************ 00:16:23.418 END TEST nvmf_auth_host 00:16:23.418 ************************************ 00:16:23.418 12:24:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:16:23.418 12:24:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:23.418 12:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:23.418 12:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.418 12:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.418 ************************************ 00:16:23.418 START TEST nvmf_digest 00:16:23.418 ************************************ 00:16:23.418 12:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:23.418 * Looking for test storage... 
00:16:23.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:23.418 12:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:23.418 12:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:16:23.418 12:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.418 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:23.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.678 --rc genhtml_branch_coverage=1 00:16:23.678 --rc genhtml_function_coverage=1 00:16:23.678 --rc genhtml_legend=1 00:16:23.678 --rc geninfo_all_blocks=1 00:16:23.678 --rc geninfo_unexecuted_blocks=1 00:16:23.678 00:16:23.678 ' 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:23.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.678 --rc genhtml_branch_coverage=1 00:16:23.678 --rc genhtml_function_coverage=1 00:16:23.678 --rc genhtml_legend=1 00:16:23.678 --rc geninfo_all_blocks=1 00:16:23.678 --rc geninfo_unexecuted_blocks=1 00:16:23.678 00:16:23.678 ' 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:23.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.678 --rc genhtml_branch_coverage=1 00:16:23.678 --rc genhtml_function_coverage=1 00:16:23.678 --rc genhtml_legend=1 00:16:23.678 --rc geninfo_all_blocks=1 00:16:23.678 --rc geninfo_unexecuted_blocks=1 00:16:23.678 00:16:23.678 ' 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:23.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.678 --rc genhtml_branch_coverage=1 00:16:23.678 --rc genhtml_function_coverage=1 00:16:23.678 --rc genhtml_legend=1 00:16:23.678 --rc geninfo_all_blocks=1 00:16:23.678 --rc geninfo_unexecuted_blocks=1 00:16:23.678 00:16:23.678 ' 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.678 12:24:10 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.678 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:23.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:23.679 Cannot find device "nvmf_init_br" 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:23.679 Cannot find device "nvmf_init_br2" 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:23.679 Cannot find device "nvmf_tgt_br" 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:16:23.679 Cannot find device "nvmf_tgt_br2" 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:23.679 Cannot find device "nvmf_init_br" 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:23.679 Cannot find device "nvmf_init_br2" 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:23.679 Cannot find device "nvmf_tgt_br" 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:23.679 Cannot find device "nvmf_tgt_br2" 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:23.679 Cannot find device "nvmf_br" 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:23.679 Cannot find device "nvmf_init_if" 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:23.679 Cannot find device "nvmf_init_if2" 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.679 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.938 12:24:10 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:23.938 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:23.938 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:23.938 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:23.938 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:23.938 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:23.938 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:23.938 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:23.938 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:23.938 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:23.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:23.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:16:23.939 00:16:23.939 --- 10.0.0.3 ping statistics --- 00:16:23.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.939 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:23.939 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:23.939 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:16:23.939 00:16:23.939 --- 10.0.0.4 ping statistics --- 00:16:23.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.939 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:23.939 00:16:23.939 --- 10.0.0.1 ping statistics --- 00:16:23.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.939 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:23.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:16:23.939 00:16:23.939 --- 10.0.0.2 ping statistics --- 00:16:23.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.939 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:23.939 ************************************ 00:16:23.939 START TEST nvmf_digest_clean 00:16:23.939 ************************************ 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
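The four pings above are the veth/netns sanity check that runs before any NVMe/TCP traffic: the host side pings the two target addresses, and the target namespace pings the two initiator addresses back. A minimal sketch of that pattern, assuming the namespace name and address plan visible in the log (the real common.sh wraps these calls in its own helpers):

    # host -> target-side addresses across the nvmf_br bridge
    for addr in 10.0.0.3 10.0.0.4; do
        ping -c 1 "$addr"
    done
    # target namespace -> initiator-side addresses
    for addr in 10.0.0.1 10.0.0.2; do
        ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$addr"
    done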
00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:23.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79314 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79314 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79314 ']' 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.939 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:24.199 [2024-12-06 12:24:10.604817] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:16:24.199 [2024-12-06 12:24:10.605065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.199 [2024-12-06 12:24:10.757955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.199 [2024-12-06 12:24:10.796027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.199 [2024-12-06 12:24:10.796377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.199 [2024-12-06 12:24:10.796643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.199 [2024-12-06 12:24:10.796795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.199 [2024-12-06 12:24:10.796896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
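nvmfappstart launches nvmf_tgt inside the test namespace with --wait-for-rpc, so the application sits idle on its RPC socket until the test explicitly initializes the framework. A minimal sketch of that start-and-wait step, assuming the paths shown in the log; the polling loop is illustrative only, the real waitforlisten helper lives in autotest_common.sh:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # poll the default RPC socket until the app answers; with --wait-for-rpc the
    # subsystems stay uninitialized until framework_start_init is issued later
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done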
00:16:24.199 [2024-12-06 12:24:10.797420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.199 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.199 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:24.199 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:24.199 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:24.199 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:24.458 [2024-12-06 12:24:10.897573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:24.458 null0 00:16:24.458 [2024-12-06 12:24:10.935679] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.458 [2024-12-06 12:24:10.959815] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:24.458 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79334 00:16:24.459 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:24.459 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79334 /var/tmp/bperf.sock 00:16:24.459 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79334 ']' 00:16:24.459 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:16:24.459 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.459 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:24.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:24.459 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.459 12:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:24.459 [2024-12-06 12:24:11.023842] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:16:24.459 [2024-12-06 12:24:11.024104] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79334 ] 00:16:24.718 [2024-12-06 12:24:11.171594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.718 [2024-12-06 12:24:11.200274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.718 12:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.718 12:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:24.718 12:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:24.718 12:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:24.718 12:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:24.977 [2024-12-06 12:24:11.583328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:24.977 12:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:24.977 12:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:25.545 nvme0n1 00:16:25.545 12:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:25.545 12:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:25.545 Running I/O for 2 seconds... 
00:16:27.419 17780.00 IOPS, 69.45 MiB/s [2024-12-06T12:24:14.077Z] 17780.00 IOPS, 69.45 MiB/s 00:16:27.419 Latency(us) 00:16:27.419 [2024-12-06T12:24:14.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.419 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:27.419 nvme0n1 : 2.01 17806.71 69.56 0.00 0.00 7182.87 6642.97 17277.67 00:16:27.419 [2024-12-06T12:24:14.077Z] =================================================================================================================== 00:16:27.419 [2024-12-06T12:24:14.077Z] Total : 17806.71 69.56 0.00 0.00 7182.87 6642.97 17277.67 00:16:27.419 { 00:16:27.419 "results": [ 00:16:27.419 { 00:16:27.419 "job": "nvme0n1", 00:16:27.419 "core_mask": "0x2", 00:16:27.419 "workload": "randread", 00:16:27.419 "status": "finished", 00:16:27.419 "queue_depth": 128, 00:16:27.419 "io_size": 4096, 00:16:27.419 "runtime": 2.01132, 00:16:27.419 "iops": 17806.71399876698, 00:16:27.419 "mibps": 69.55747655768351, 00:16:27.419 "io_failed": 0, 00:16:27.419 "io_timeout": 0, 00:16:27.419 "avg_latency_us": 7182.873855697841, 00:16:27.419 "min_latency_us": 6642.967272727273, 00:16:27.419 "max_latency_us": 17277.672727272726 00:16:27.419 } 00:16:27.419 ], 00:16:27.419 "core_count": 1 00:16:27.419 } 00:16:27.678 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:27.678 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:27.678 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:27.678 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:27.678 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:27.678 | select(.opcode=="crc32c") 00:16:27.678 | "\(.module_name) \(.executed)"' 00:16:27.937 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:27.937 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:27.937 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:27.937 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79334 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79334 ']' 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79334 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79334 00:16:27.938 killing process with pid 79334 00:16:27.938 Received shutdown signal, test time was about 2.000000 seconds 00:16:27.938 00:16:27.938 Latency(us) 00:16:27.938 [2024-12-06T12:24:14.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:27.938 [2024-12-06T12:24:14.596Z] =================================================================================================================== 00:16:27.938 [2024-12-06T12:24:14.596Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79334' 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79334 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79334 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79387 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79387 /var/tmp/bperf.sock 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79387 ']' 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:27.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.938 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:27.938 [2024-12-06 12:24:14.574605] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
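Each run_bperf iteration repeats the drive sequence the first run showed above: bdevperf starts with -z --wait-for-rpc on its private socket, the framework is initialized, a controller is attached with data digest enabled, and perform_tests is issued through bdevperf.py. A condensed sketch of that sequence for the 131072-byte randread pass, using the arguments visible in the log (the waitforlisten step between start and init is omitted here):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # bring up the bdev framework, then attach the NVMe/TCP controller with data digest
    $rpc framework_start_init
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # kick off the timed workload over the same RPC socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests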
00:16:27.938 [2024-12-06 12:24:14.574862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79387 ] 00:16:27.938 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:27.938 Zero copy mechanism will not be used. 00:16:28.197 [2024-12-06 12:24:14.717243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.197 [2024-12-06 12:24:14.745498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.197 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.197 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:28.197 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:28.197 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:28.197 12:24:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:28.764 [2024-12-06 12:24:15.128013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:28.764 12:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:28.764 12:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:29.022 nvme0n1 00:16:29.022 12:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:29.022 12:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:29.022 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:29.022 Zero copy mechanism will not be used. 00:16:29.022 Running I/O for 2 seconds... 
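(Hedged sketch, not part of the captured output.) Each digest "clean" case above follows the same three-step RPC flow against the bdevperf instance listening on /var/tmp/bperf.sock: finish framework init, attach the target with data digest enabled, then kick off the timed run. The helper name run_digest_case below is hypothetical; the paths and RPC arguments are the ones printed in the trace.

run_digest_case() {
    local sock=/var/tmp/bperf.sock
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    # bdevperf was started with --wait-for-rpc, so subsystem init is deferred until now
    "$rpc" -s "$sock" framework_start_init
    # attach the NVMe/TCP target with data digest (--ddgst) so every data PDU carries a CRC32C
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the workload configured on the bdevperf command line (-w/-o/-q/-t)
    "$bperf_py" -s "$sock" perform_tests
}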
00:16:31.331 8832.00 IOPS, 1104.00 MiB/s [2024-12-06T12:24:17.989Z] 8904.00 IOPS, 1113.00 MiB/s 00:16:31.331 Latency(us) 00:16:31.331 [2024-12-06T12:24:17.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.331 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:31.331 nvme0n1 : 2.00 8900.78 1112.60 0.00 0.00 1794.66 1593.72 11021.96 00:16:31.331 [2024-12-06T12:24:17.989Z] =================================================================================================================== 00:16:31.331 [2024-12-06T12:24:17.989Z] Total : 8900.78 1112.60 0.00 0.00 1794.66 1593.72 11021.96 00:16:31.331 { 00:16:31.331 "results": [ 00:16:31.331 { 00:16:31.331 "job": "nvme0n1", 00:16:31.331 "core_mask": "0x2", 00:16:31.331 "workload": "randread", 00:16:31.331 "status": "finished", 00:16:31.331 "queue_depth": 16, 00:16:31.331 "io_size": 131072, 00:16:31.331 "runtime": 2.002521, 00:16:31.331 "iops": 8900.780566096435, 00:16:31.331 "mibps": 1112.5975707620544, 00:16:31.331 "io_failed": 0, 00:16:31.331 "io_timeout": 0, 00:16:31.331 "avg_latency_us": 1794.6648735106905, 00:16:31.331 "min_latency_us": 1593.7163636363637, 00:16:31.331 "max_latency_us": 11021.963636363636 00:16:31.331 } 00:16:31.331 ], 00:16:31.331 "core_count": 1 00:16:31.331 } 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:31.331 | select(.opcode=="crc32c") 00:16:31.331 | "\(.module_name) \(.executed)"' 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79387 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79387 ']' 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79387 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79387 00:16:31.331 killing process with pid 79387 00:16:31.331 Received shutdown signal, test time was about 2.000000 seconds 00:16:31.331 00:16:31.331 Latency(us) 00:16:31.331 [2024-12-06T12:24:17.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:16:31.331 [2024-12-06T12:24:17.989Z] =================================================================================================================== 00:16:31.331 [2024-12-06T12:24:17.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79387' 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79387 00:16:31.331 12:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79387 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79434 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79434 /var/tmp/bperf.sock 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79434 ']' 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:31.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.590 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:31.590 [2024-12-06 12:24:18.080284] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:16:31.590 [2024-12-06 12:24:18.080570] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79434 ] 00:16:31.590 [2024-12-06 12:24:18.224726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.849 [2024-12-06 12:24:18.253283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.849 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.849 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:31.849 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:31.849 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:31.849 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:32.108 [2024-12-06 12:24:18.595582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:32.108 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:32.108 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:32.367 nvme0n1 00:16:32.367 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:32.367 12:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:32.625 Running I/O for 2 seconds... 
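(Hedged sketch.) After each run the script reads the accel framework counters back from bperf and checks that the CRC32C work was actually executed, and by the expected module (software here, since scan_dsa=false). The jq filter is the one shown in the trace; wrapping it in a standalone function is an assumption.

check_crc32c_stats() {
    local sock=/var/tmp/bperf.sock
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local exp_module=software          # DSA scanning disabled, so the software module must do crc32c
    local acc_module acc_executed

    # keep only the crc32c row of accel_get_stats and split it into "module executed"
    read -r acc_module acc_executed < <("$rpc" -s "$sock" accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

    (( acc_executed > 0 )) || return 1             # digests must have been computed at all
    [[ $acc_module == "$exp_module" ]]             # and by the module the test expects
}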
00:16:34.496 19051.00 IOPS, 74.42 MiB/s [2024-12-06T12:24:21.154Z] 19241.00 IOPS, 75.16 MiB/s 00:16:34.496 Latency(us) 00:16:34.496 [2024-12-06T12:24:21.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.496 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.496 nvme0n1 : 2.01 19283.62 75.33 0.00 0.00 6632.31 6106.76 15728.64 00:16:34.496 [2024-12-06T12:24:21.154Z] =================================================================================================================== 00:16:34.496 [2024-12-06T12:24:21.154Z] Total : 19283.62 75.33 0.00 0.00 6632.31 6106.76 15728.64 00:16:34.496 { 00:16:34.496 "results": [ 00:16:34.496 { 00:16:34.496 "job": "nvme0n1", 00:16:34.496 "core_mask": "0x2", 00:16:34.496 "workload": "randwrite", 00:16:34.496 "status": "finished", 00:16:34.496 "queue_depth": 128, 00:16:34.496 "io_size": 4096, 00:16:34.496 "runtime": 2.008803, 00:16:34.496 "iops": 19283.623132781064, 00:16:34.496 "mibps": 75.32665286242603, 00:16:34.496 "io_failed": 0, 00:16:34.496 "io_timeout": 0, 00:16:34.496 "avg_latency_us": 6632.309580832983, 00:16:34.496 "min_latency_us": 6106.763636363637, 00:16:34.496 "max_latency_us": 15728.64 00:16:34.496 } 00:16:34.496 ], 00:16:34.496 "core_count": 1 00:16:34.496 } 00:16:34.496 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:34.496 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:34.496 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:34.496 | select(.opcode=="crc32c") 00:16:34.496 | "\(.module_name) \(.executed)"' 00:16:34.496 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:34.496 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79434 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79434 ']' 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79434 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79434 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:34.755 
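(Hedged reconstruction.) The killprocess trace above repeats the same guard sequence for every bperf pid; it amounts to something along these lines, with the caveat that the real helper in autotest_common.sh may handle sudo-wrapped processes differently.

killprocess() {
    local pid=$1 process_name=
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1            # is the process still alive?
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [[ $process_name == sudo ]] && return 1           # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it so the next run starts from a clean slate
}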
12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79434' 00:16:34.755 killing process with pid 79434 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79434 00:16:34.755 Received shutdown signal, test time was about 2.000000 seconds 00:16:34.755 00:16:34.755 Latency(us) 00:16:34.755 [2024-12-06T12:24:21.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.755 [2024-12-06T12:24:21.413Z] =================================================================================================================== 00:16:34.755 [2024-12-06T12:24:21.413Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:34.755 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79434 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79486 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79486 /var/tmp/bperf.sock 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79486 ']' 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:35.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.014 12:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:35.014 [2024-12-06 12:24:21.546812] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:16:35.014 [2024-12-06 12:24:21.547086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:16:35.014 Zero copy mechanism will not be used. 
00:16:35.014 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79486 ] 00:16:35.273 [2024-12-06 12:24:21.691892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.273 [2024-12-06 12:24:21.720318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.840 12:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.840 12:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:35.840 12:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:35.840 12:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:35.840 12:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:36.099 [2024-12-06 12:24:22.694086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:36.099 12:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:36.099 12:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:36.359 nvme0n1 00:16:36.617 12:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:36.617 12:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:36.617 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:36.617 Zero copy mechanism will not be used. 00:16:36.617 Running I/O for 2 seconds... 
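(Hedged annotation.) For reference, this is the bdevperf command line used for the run above, with the flag meanings as generally documented for SPDK's bdevperf; only the command itself is taken verbatim from the trace.

#   -m 2             core mask: run the single reactor on core 1
#   -r <sock>        RPC listen socket that rpc.py / bdevperf.py connect to
#   -w randwrite     workload type (randread in the earlier cases)
#   -o 131072        I/O size in bytes; above the 65536-byte threshold, hence "Zero copy mechanism will not be used"
#   -t 2             run time in seconds
#   -q 16            queue depth
#   -z               wait for the perform_tests RPC instead of starting I/O immediately
#   --wait-for-rpc   defer framework init until framework_start_init is issued
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
bperfpid=$!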
00:16:38.491 7370.00 IOPS, 921.25 MiB/s [2024-12-06T12:24:25.407Z] 7372.50 IOPS, 921.56 MiB/s 00:16:38.749 Latency(us) 00:16:38.749 [2024-12-06T12:24:25.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.749 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:38.749 nvme0n1 : 2.00 7368.29 921.04 0.00 0.00 2166.34 1623.51 4557.73 00:16:38.749 [2024-12-06T12:24:25.407Z] =================================================================================================================== 00:16:38.749 [2024-12-06T12:24:25.407Z] Total : 7368.29 921.04 0.00 0.00 2166.34 1623.51 4557.73 00:16:38.749 { 00:16:38.749 "results": [ 00:16:38.749 { 00:16:38.749 "job": "nvme0n1", 00:16:38.749 "core_mask": "0x2", 00:16:38.749 "workload": "randwrite", 00:16:38.749 "status": "finished", 00:16:38.749 "queue_depth": 16, 00:16:38.749 "io_size": 131072, 00:16:38.749 "runtime": 2.004263, 00:16:38.749 "iops": 7368.294480315208, 00:16:38.749 "mibps": 921.036810039401, 00:16:38.749 "io_failed": 0, 00:16:38.749 "io_timeout": 0, 00:16:38.749 "avg_latency_us": 2166.33942283069, 00:16:38.749 "min_latency_us": 1623.5054545454545, 00:16:38.749 "max_latency_us": 4557.730909090909 00:16:38.749 } 00:16:38.749 ], 00:16:38.749 "core_count": 1 00:16:38.749 } 00:16:38.749 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:38.749 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:38.749 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:38.749 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:38.749 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:38.749 | select(.opcode=="crc32c") 00:16:38.749 | "\(.module_name) \(.executed)"' 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79486 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79486 ']' 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79486 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79486 00:16:39.008 killing process with pid 79486 00:16:39.008 Received shutdown signal, test time was about 2.000000 seconds 00:16:39.008 00:16:39.008 Latency(us) 00:16:39.008 [2024-12-06T12:24:25.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:39.008 [2024-12-06T12:24:25.666Z] =================================================================================================================== 00:16:39.008 [2024-12-06T12:24:25.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79486' 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79486 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79486 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79314 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79314 ']' 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79314 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.008 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79314 00:16:39.268 killing process with pid 79314 00:16:39.268 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.268 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.268 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79314' 00:16:39.268 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79314 00:16:39.268 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79314 00:16:39.268 ************************************ 00:16:39.268 END TEST nvmf_digest_clean 00:16:39.268 ************************************ 00:16:39.268 00:16:39.268 real 0m15.252s 00:16:39.268 user 0m29.985s 00:16:39.268 sys 0m4.237s 00:16:39.268 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:39.269 ************************************ 00:16:39.269 START TEST nvmf_digest_error 00:16:39.269 ************************************ 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:16:39.269 12:24:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79571 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79571 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79571 ']' 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.269 12:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:39.269 [2024-12-06 12:24:25.905444] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:16:39.269 [2024-12-06 12:24:25.905700] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.527 [2024-12-06 12:24:26.050697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.527 [2024-12-06 12:24:26.077188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.527 [2024-12-06 12:24:26.077236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.527 [2024-12-06 12:24:26.077262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.527 [2024-12-06 12:24:26.077269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.527 [2024-12-06 12:24:26.077275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
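(Hedged sketch.) The nvmf_digest_error cases that follow differ from the clean cases only in how CRC32C is wired up: the target routes crc32c to the accel error-injection module, and the host-side bperf first disables injection, attaches with data digest, then corrupts 256 digest operations so the reads below complete with transient transport errors. The commands are the ones traced below; the target-side rpc.py invocation is simplified here (the script actually goes through its rpc_cmd wrapper inside the nvmf network namespace).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: make the accel framework hand crc32c to the "error" module
"$rpc" accel_assign_opc -o crc32c -m error

# host (bperf) side: start with injection disabled and attach with data digest enabled...
"$rpc" -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# ...then corrupt the next 256 crc32c operations so every read hits a data digest error
"$rpc" -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256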
00:16:39.527 [2024-12-06 12:24:26.077546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:40.464 [2024-12-06 12:24:26.881935] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:40.464 [2024-12-06 12:24:26.916462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.464 null0 00:16:40.464 [2024-12-06 12:24:26.950090] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.464 [2024-12-06 12:24:26.974170] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79603 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79603 /var/tmp/bperf.sock 00:16:40.464 12:24:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79603 ']' 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:40.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.464 12:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:40.464 [2024-12-06 12:24:27.036741] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:16:40.464 [2024-12-06 12:24:27.036990] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79603 ] 00:16:40.723 [2024-12-06 12:24:27.189401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.724 [2024-12-06 12:24:27.227415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.724 [2024-12-06 12:24:27.254594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.724 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.724 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:40.724 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:40.724 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:40.983 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:40.983 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.983 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:40.983 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.983 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:40.983 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:41.242 nvme0n1 00:16:41.242 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:41.242 12:24:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.242 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:41.242 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.242 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:41.242 12:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:41.501 Running I/O for 2 seconds... 00:16:41.501 [2024-12-06 12:24:28.000488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.501 [2024-12-06 12:24:28.000548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.501 [2024-12-06 12:24:28.000562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.501 [2024-12-06 12:24:28.014436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.501 [2024-12-06 12:24:28.014470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.501 [2024-12-06 12:24:28.014498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.501 [2024-12-06 12:24:28.028662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.501 [2024-12-06 12:24:28.028696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.501 [2024-12-06 12:24:28.028725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.501 [2024-12-06 12:24:28.042523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.501 [2024-12-06 12:24:28.042572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.501 [2024-12-06 12:24:28.042600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.501 [2024-12-06 12:24:28.057067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.501 [2024-12-06 12:24:28.057102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.501 [2024-12-06 12:24:28.057130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.501 [2024-12-06 12:24:28.071109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.501 [2024-12-06 12:24:28.071145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23275 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.501 [2024-12-06 12:24:28.071174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.501 [2024-12-06 12:24:28.085155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.501 [2024-12-06 12:24:28.085195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.501 [2024-12-06 12:24:28.085223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.501 [2024-12-06 12:24:28.098916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.501 [2024-12-06 12:24:28.098949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.501 [2024-12-06 12:24:28.098977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.501 [2024-12-06 12:24:28.114025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.501 [2024-12-06 12:24:28.114245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.501 [2024-12-06 12:24:28.114263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.501 [2024-12-06 12:24:28.129257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.501 [2024-12-06 12:24:28.129436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.501 [2024-12-06 12:24:28.129453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.501 [2024-12-06 12:24:28.145116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.501 [2024-12-06 12:24:28.145148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.501 [2024-12-06 12:24:28.145160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.161888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.161921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.161934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.177171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.177212] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.177224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.192199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.192386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.192402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.207115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.207446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.207577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.222680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.222859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.222978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.238142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.238335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.238453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.253378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.253563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.253692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.268923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.269104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.269246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.284471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 
12:24:28.284671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.284797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.299902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.300100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.300351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.315789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.315991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.316116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.331054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.331288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.331512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.345626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.345820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.345961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.360336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.360547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.360653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.374865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.374899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.374928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.388941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.388975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.389004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.760 [2024-12-06 12:24:28.402948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:41.760 [2024-12-06 12:24:28.402981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.760 [2024-12-06 12:24:28.403009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.417954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.417987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.418015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.432570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.432602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.432629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.446858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.446890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.446918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.460912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.460944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.460972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.475237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.475293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.475322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.489536] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.489569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.489597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.503529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.503563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.503607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.517649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.517681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.517709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.531889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.531921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.531949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.545893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.545925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.545956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.559967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.559999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.560029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.574059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.574093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.574121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:42.019 [2024-12-06 12:24:28.588105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.588137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.588165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.602148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.602205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.602235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.616247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.616279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.616307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.631698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.631876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.631908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.646507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.646680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.646714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.019 [2024-12-06 12:24:28.663580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.019 [2024-12-06 12:24:28.663645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.019 [2024-12-06 12:24:28.663673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.680416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.680454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.680484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.694459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.694491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.694519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.708608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.708641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.708669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.722596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.722628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.722656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.736583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.736615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.736643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.750445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.750477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.750505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.764354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.764386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.764414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.778154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.778214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.778241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.792086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.792119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.792147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.805911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.805943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.805971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.819931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.819962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.819989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.833947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.833979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.834007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.847993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.848025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.848052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.861916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.861948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.861976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.876024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.876057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:42.278 [2024-12-06 12:24:28.876085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.890160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.890202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.890230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.904485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.278 [2024-12-06 12:24:28.904518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.278 [2024-12-06 12:24:28.904545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.278 [2024-12-06 12:24:28.924717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.279 [2024-12-06 12:24:28.924751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.279 [2024-12-06 12:24:28.924780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.536 [2024-12-06 12:24:28.940113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.536 [2024-12-06 12:24:28.940145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.536 [2024-12-06 12:24:28.940173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.536 [2024-12-06 12:24:28.954121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.536 [2024-12-06 12:24:28.954153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.536 [2024-12-06 12:24:28.954182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.536 [2024-12-06 12:24:28.968271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.536 [2024-12-06 12:24:28.968303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.536 [2024-12-06 12:24:28.968331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.536 17332.00 IOPS, 67.70 MiB/s [2024-12-06T12:24:29.194Z] [2024-12-06 12:24:28.983842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.536 [2024-12-06 12:24:28.983875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.536 [2024-12-06 12:24:28.983903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.536 [2024-12-06 12:24:28.997738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.536 [2024-12-06 12:24:28.997919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.536 [2024-12-06 12:24:28.997953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.536 [2024-12-06 12:24:29.012027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.536 [2024-12-06 12:24:29.012249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.536 [2024-12-06 12:24:29.012267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.536 [2024-12-06 12:24:29.026281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.536 [2024-12-06 12:24:29.026458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.536 [2024-12-06 12:24:29.026492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.536 [2024-12-06 12:24:29.040430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.536 [2024-12-06 12:24:29.040463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.536 [2024-12-06 12:24:29.040492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.536 [2024-12-06 12:24:29.054431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.537 [2024-12-06 12:24:29.054464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.537 [2024-12-06 12:24:29.054493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.537 [2024-12-06 12:24:29.068882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.537 [2024-12-06 12:24:29.068915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.537 [2024-12-06 12:24:29.068943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.537 [2024-12-06 12:24:29.083039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x154bb50) 00:16:42.537 [2024-12-06 12:24:29.083072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.537 [2024-12-06 12:24:29.083100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.537 [2024-12-06 12:24:29.097273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.537 [2024-12-06 12:24:29.097306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.537 [2024-12-06 12:24:29.097335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.537 [2024-12-06 12:24:29.111396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.537 [2024-12-06 12:24:29.111569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.537 [2024-12-06 12:24:29.111587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.537 [2024-12-06 12:24:29.125736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.537 [2024-12-06 12:24:29.125770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.537 [2024-12-06 12:24:29.125799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.537 [2024-12-06 12:24:29.139781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.537 [2024-12-06 12:24:29.139813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.537 [2024-12-06 12:24:29.139841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.537 [2024-12-06 12:24:29.153840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.537 [2024-12-06 12:24:29.153872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.537 [2024-12-06 12:24:29.153901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.537 [2024-12-06 12:24:29.168020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.537 [2024-12-06 12:24:29.168052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.537 [2024-12-06 12:24:29.168080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.537 [2024-12-06 12:24:29.183430] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.537 [2024-12-06 12:24:29.183632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.537 [2024-12-06 12:24:29.183678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.199042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.199076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.199104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.213389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.213422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.213449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.227521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.227726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.227758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.241788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.241822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.241850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.256086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.256120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.256148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.270254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.270287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.270314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:42.795 [2024-12-06 12:24:29.284490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.284523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.284552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.298535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.298569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.298597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.312894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.312931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.312960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.328685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.328721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.328750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.344642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.344675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.344704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.359888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.360055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.360087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.375172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.375424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.375442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.390388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.390553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.390600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.405589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.405767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.405783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.420820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.420855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.420883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.795 [2024-12-06 12:24:29.435873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:42.795 [2024-12-06 12:24:29.436035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.795 [2024-12-06 12:24:29.436068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.451923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.451960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.451989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.467546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.467610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.467643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.482648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.482682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.482711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.498124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.498161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.498219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.512904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.512937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.512965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.527183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.527245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.527314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.541432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.541465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.541493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.555502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.555536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.555565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.569642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.569674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.569702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.583784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.583817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:43.054 [2024-12-06 12:24:29.583844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.597852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.597884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.597912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.612097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.612129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.612157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.626254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.626286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.626315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.640470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.640502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.640531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.654543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.654593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.654622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.669582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.669616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.669644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.686193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.686438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:18562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.686456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.054 [2024-12-06 12:24:29.703044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.054 [2024-12-06 12:24:29.703078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.054 [2024-12-06 12:24:29.703106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.720083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.720308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.720325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.734735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.734927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.734944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.749313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.749345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.749374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.763410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.763614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.763632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.777828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.777861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.777889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.791935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.791968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.791998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.805964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.805997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.806025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.820193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.820252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.820281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.834323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.834501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.834534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.848660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.848694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.848722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.868754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.868787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.868817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.883721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.883902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.883935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.898809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 
[2024-12-06 12:24:29.898991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.899023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.913372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.913549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.913581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.344 [2024-12-06 12:24:29.927853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.344 [2024-12-06 12:24:29.928012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.344 [2024-12-06 12:24:29.928043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.345 [2024-12-06 12:24:29.942217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.345 [2024-12-06 12:24:29.942429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.345 [2024-12-06 12:24:29.942556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.345 [2024-12-06 12:24:29.956722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.345 [2024-12-06 12:24:29.956937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.345 [2024-12-06 12:24:29.957096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.345 [2024-12-06 12:24:29.971493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154bb50) 00:16:43.345 [2024-12-06 12:24:29.971694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.345 [2024-12-06 12:24:29.971838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:43.345 17268.00 IOPS, 67.45 MiB/s 00:16:43.345 Latency(us) 00:16:43.345 [2024-12-06T12:24:30.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.345 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:43.345 nvme0n1 : 2.00 17289.51 67.54 0.00 0.00 7398.46 6642.97 28359.21 00:16:43.345 [2024-12-06T12:24:30.003Z] =================================================================================================================== 00:16:43.345 [2024-12-06T12:24:30.003Z] Total : 17289.51 67.54 0.00 0.00 7398.46 6642.97 28359.21 00:16:43.345 { 00:16:43.345 "results": 
[ 00:16:43.345 { 00:16:43.345 "job": "nvme0n1", 00:16:43.345 "core_mask": "0x2", 00:16:43.345 "workload": "randread", 00:16:43.345 "status": "finished", 00:16:43.345 "queue_depth": 128, 00:16:43.345 "io_size": 4096, 00:16:43.345 "runtime": 2.004915, 00:16:43.345 "iops": 17289.511026652002, 00:16:43.345 "mibps": 67.53715244785938, 00:16:43.345 "io_failed": 0, 00:16:43.345 "io_timeout": 0, 00:16:43.345 "avg_latency_us": 7398.455536579737, 00:16:43.345 "min_latency_us": 6642.967272727273, 00:16:43.345 "max_latency_us": 28359.214545454546 00:16:43.345 } 00:16:43.345 ], 00:16:43.345 "core_count": 1 00:16:43.345 } 00:16:43.603 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:43.603 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:43.603 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:43.603 | .driver_specific 00:16:43.603 | .nvme_error 00:16:43.603 | .status_code 00:16:43.603 | .command_transient_transport_error' 00:16:43.603 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 )) 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79603 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79603 ']' 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79603 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79603 00:16:43.861 killing process with pid 79603 00:16:43.861 Received shutdown signal, test time was about 2.000000 seconds 00:16:43.861 00:16:43.861 Latency(us) 00:16:43.861 [2024-12-06T12:24:30.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.861 [2024-12-06T12:24:30.519Z] =================================================================================================================== 00:16:43.861 [2024-12-06T12:24:30.519Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79603' 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79603 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79603 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79646 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79646 /var/tmp/bperf.sock 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79646 ']' 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:43.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.861 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:43.861 [2024-12-06 12:24:30.509015] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:16:43.861 [2024-12-06 12:24:30.509305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79646 ] 00:16:43.861 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:43.861 Zero copy mechanism will not be used. 
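At this point the harness has read back the transient-error count from the first pass (via bdev_get_iostat piped through jq, as traced above), killed bdevperf pid 79603, and is starting a second error-injection pass with 128 KiB random reads at queue depth 16 — hence the "I/O size of 131072 is greater than zero copy threshold" notice, since zero copy is skipped for this run. A rough sketch of the launch-and-wait pattern the traced digest.sh helpers appear to follow; the binary path, core mask, and bperf.sock socket are taken from the trace above, while the polling loop is only a simplified stand-in for the harness's waitforlisten helper:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start bdevperf idle (-z) on core mask 0x2, exposing an RPC socket the test drives later.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Simplified stand-in for waitforlisten: poll until the RPC socket answers.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Once the socket responds, the test configures the bdev layer and error injection over that socket, as the next trace lines show.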
00:16:44.120 [2024-12-06 12:24:30.656146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.120 [2024-12-06 12:24:30.685334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.120 [2024-12-06 12:24:30.714879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:44.120 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.120 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:44.120 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:44.120 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:44.379 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:44.379 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.379 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:44.379 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.379 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:44.379 12:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:44.636 nvme0n1 00:16:44.636 12:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:44.636 12:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.636 12:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:44.636 12:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.636 12:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:44.636 12:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:44.895 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:44.895 Zero copy mechanism will not be used. 00:16:44.895 Running I/O for 2 seconds... 
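The RPC sequence traced just above is what produces the digest-error flood that follows: per-NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with data digest (--ddgst) over TCP, and the crc32c accel operation is set to corrupt every 32nd computation, so received data digests fail verification and each affected read completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A condensed sketch of that sequence, reusing the socket, target address, and subsystem NQN shown in the trace; the final read-back mirrors the bdev_get_iostat | jq step used for the previous run and is included here only to show how the error count is collected:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Keep per-controller error statistics and retry failed I/O indefinitely.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable          # start with injection off
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32    # corrupt every 32nd crc32c

    # Drive the 2-second workload, then count transient transport errors.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    $RPC bdev_get_iostat -b nvme0n1 | jq -r \
        '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The error lines that follow are therefore expected output: the test passes as long as the retried reads ultimately succeed and the transient-error counter comes back greater than zero.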
00:16:44.895 [2024-12-06 12:24:31.400897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.400958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.400971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.404815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.404850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.404879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.408808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.408842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.408871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.412665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.412698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.412727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.416512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.416546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.416574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.420404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.420436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.420465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.424207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.424266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.424295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.428027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.428060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.428089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.432018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.432051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.432080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.435902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.435936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.435964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.439787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.439820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.439848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.443803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.443836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.443865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.447711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.447745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.447774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.451685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.451718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.451747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.455444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.455478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.455507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.459130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.459369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.459387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.463235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.463290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.895 [2024-12-06 12:24:31.463319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:44.895 [2024-12-06 12:24:31.466982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.895 [2024-12-06 12:24:31.467176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.467222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.471025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.471228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.471261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.475101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.475326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.475344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.479258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.479365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.479378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.483087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.483333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.483351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.487029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.487214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.487231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.491096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.491355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.491374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.495913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.495946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.495973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.500408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.500440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.500468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.504312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.504344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.504372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.508066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.508099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 
[2024-12-06 12:24:31.508127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.511970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.512002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.512031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.515925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.515958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.515986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.519856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.519888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.519916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.523822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.523855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.523883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.527736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.527768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.527797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.531552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.531587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.531630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.535408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.535445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.535475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.539319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.539355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.539367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.543059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.543280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.543314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:44.896 [2024-12-06 12:24:31.547289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:44.896 [2024-12-06 12:24:31.547347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.896 [2024-12-06 12:24:31.547361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.155 [2024-12-06 12:24:31.551578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.155 [2024-12-06 12:24:31.551645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.155 [2024-12-06 12:24:31.551689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.155 [2024-12-06 12:24:31.555571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.155 [2024-12-06 12:24:31.555638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.155 [2024-12-06 12:24:31.555682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.155 [2024-12-06 12:24:31.559528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.155 [2024-12-06 12:24:31.559564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.155 [2024-12-06 12:24:31.559608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.563380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.563415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.563445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.567103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.567367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.567385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.571159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.571201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.571230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.575012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.575216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.575234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.579107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.579336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.579355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.583231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.583286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.583317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.587092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.587302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.587320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.591222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.591254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.591306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.595060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.595246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.595304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.599134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.599365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.599383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.603300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.603336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.603365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.607047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.607214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.607248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.611115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.611342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.611376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.615203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.615235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.615271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.618998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 
[2024-12-06 12:24:31.619215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.619234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.623102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.623326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.623345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.627146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.627359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.627377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.631216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.631248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.631300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.635043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.635246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.635320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.639102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.156 [2024-12-06 12:24:31.639365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.156 [2024-12-06 12:24:31.639383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.156 [2024-12-06 12:24:31.643148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.643367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.643396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.647302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.647346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.647362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.651088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.651333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.651351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.655081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.655325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.655358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.659210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.659242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.659295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.663071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.663314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.663348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.666990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.667164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.667197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.671062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.671254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.671295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.675251] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.675312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.675342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.679481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.679519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.679534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.683682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.683715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.683744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.687854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.687886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.687929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.692166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.692228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.692242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.696252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.696296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.696325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.700420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.700455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.700483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:16:45.157 [2024-12-06 12:24:31.704597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.704630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.704659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.708785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.708819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.708847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.713108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.713144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.713173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.717484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.717535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.717564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.721969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.722006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.722035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.726667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.726830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.726863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.157 [2024-12-06 12:24:31.731048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.157 [2024-12-06 12:24:31.731083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.157 [2024-12-06 12:24:31.731111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.735386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.735427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.735441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.739905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.739940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.739968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.744180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.744254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.744269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.748126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.748161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.748218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.752097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.752131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.752160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.756150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.756223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.756238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.760139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.760199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.760228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.764173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.764248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.764262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.768104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.768139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.768168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.772039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.772083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.772094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.776072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.776106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.776135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.780064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.780097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.780125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.783995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.784029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.784059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.788016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.788050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.788078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.792032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.792066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.792094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.796041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.796077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.796105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.800023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.800057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.800085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.803969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.804003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.804031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.158 [2024-12-06 12:24:31.808358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.158 [2024-12-06 12:24:31.808392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.158 [2024-12-06 12:24:31.808420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.812616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.812650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.812693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.816614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.816648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:45.419 [2024-12-06 12:24:31.816677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.820705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.820739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.820767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.824796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.824830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.824858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.828754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.828787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.828816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.832759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.832794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.832822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.836838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.836873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.836901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.840885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.840919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.840949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.844892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.844926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5696 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.844954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.848923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.848958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.848986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.853011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.853046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.853074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.857207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.857241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.857270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.861027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.861235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.861253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.865116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.865322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.865339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.869196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.869230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.869259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.873267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.873461] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.873478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.877546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.877580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.877608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.881733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.419 [2024-12-06 12:24:31.881767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.419 [2024-12-06 12:24:31.881796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.419 [2024-12-06 12:24:31.885666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.885698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.885727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.889616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.889648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.889677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.893389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.893421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.893449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.897190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.897221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.897250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.901165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.901402] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.901435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.905318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.905351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.905379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.909106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.909329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.909347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.913268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.913317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.913346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.917044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.917245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.917262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.921080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.921270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.921287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.925141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.925326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.925359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.929184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 
00:16:45.420 [2024-12-06 12:24:31.929216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.929245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.932995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.933214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.933232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.937042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.937076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.937105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.940949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.940981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.941009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.944811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.944844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.944872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.948716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.948750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.948778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.952609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.952643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.952671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.956447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.956479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.956508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.960150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.960358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.960391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.964274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.964471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.964489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.968373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.968405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.420 [2024-12-06 12:24:31.968434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.420 [2024-12-06 12:24:31.972123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.420 [2024-12-06 12:24:31.972331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:31.972364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:31.976272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:31.976304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:31.976333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:31.980067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:31.980294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:31.980313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:31.984115] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:31.984338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:31.984355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:31.988274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:31.988308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:31.988336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:31.992047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:31.992269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:31.992292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:31.996255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:31.996287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:31.996316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.000055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.000254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.000272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.004173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.004397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.004414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.008261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.008295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.008324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:16:45.421 [2024-12-06 12:24:32.011975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.012167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.012216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.016147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.016350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.016367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.020067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.020283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.020300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.024895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.024945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.024973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.029486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.029518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.029546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.033323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.033355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.033384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.037062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.037096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.037124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.040967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.041000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.041029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.044884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.044917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.044946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.048848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.048881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.048909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.052915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.052948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.052976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.056749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.056782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.056810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.060688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.060720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.421 [2024-12-06 12:24:32.060749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.421 [2024-12-06 12:24:32.064704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.421 [2024-12-06 12:24:32.064737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.422 [2024-12-06 12:24:32.064765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.422 [2024-12-06 12:24:32.068502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.422 [2024-12-06 12:24:32.068536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.422 [2024-12-06 12:24:32.068564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.422 [2024-12-06 12:24:32.072699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.422 [2024-12-06 12:24:32.072731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.422 [2024-12-06 12:24:32.072759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.076850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.076883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.076911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.081060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.081094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.081123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.085156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.085215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.085245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.088952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.088984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.089012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.092865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.092898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.092926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.096778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.096811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.096839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.100678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.100711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.100739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.104507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.104541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.104569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.108397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.108429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.108458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.112138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.112380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.112398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.116104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.116132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.116160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.120001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.120237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 
[2024-12-06 12:24:32.120360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.124397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.124612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.124732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.128701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.128903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.129029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.133075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.133284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.133556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.137561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.137765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.137897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.141773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.141969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.142112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.146037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.146274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.146390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.150380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.150580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.150707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.154625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.154823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.683 [2024-12-06 12:24:32.154943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.683 [2024-12-06 12:24:32.158882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.683 [2024-12-06 12:24:32.159086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.159217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.163383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.163560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.163705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.167607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.167835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.167941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.171945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.171979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.172008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.175759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.175794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.175822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.179562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.179611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.179639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.183359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.183394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.183407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.187071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.187317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.187336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.191095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.191344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.191363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.195080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.195287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.195321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.199065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.199292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.199326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.203040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.203258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.203302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.207027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.207227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.207245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.211066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.211293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.211311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.215119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.215346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.215380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.219150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.219348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.219366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.223114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.223361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.223380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.227229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.227288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.227332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.231081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.231324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.231358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.235119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 
00:16:45.684 [2024-12-06 12:24:32.235362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.235380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.239205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.684 [2024-12-06 12:24:32.239238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.684 [2024-12-06 12:24:32.239291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.684 [2024-12-06 12:24:32.243014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.243233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.243251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.247083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.247343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.247363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.251160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.251384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.251402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.255250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.255330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.255344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.259021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.259222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.259255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.263144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.263369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.263388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.267200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.267233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.267261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.270995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.271216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.271234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.275031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.275248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.275293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.279132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.279345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.279393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.283195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.283228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.283257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.287177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.287209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.287237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.290965] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.291141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.291158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.294974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.295149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.295192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.299006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.299205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.299222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.303049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.303247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.303289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.307081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.307289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.307322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.311060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.311262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.311335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.315200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.315230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.315242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:16:45.685 [2024-12-06 12:24:32.319056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.319222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.319255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.323083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.323332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.323351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.327095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.685 [2024-12-06 12:24:32.327316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.685 [2024-12-06 12:24:32.327351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.685 [2024-12-06 12:24:32.331119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.686 [2024-12-06 12:24:32.331375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.686 [2024-12-06 12:24:32.331393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.686 [2024-12-06 12:24:32.335719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.686 [2024-12-06 12:24:32.335754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.686 [2024-12-06 12:24:32.335784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.946 [2024-12-06 12:24:32.339931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.946 [2024-12-06 12:24:32.339965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.946 [2024-12-06 12:24:32.339993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.946 [2024-12-06 12:24:32.343945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.343980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.344009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.348031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.348064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.348093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.351974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.352007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.352035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.355897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.355930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.355958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.359820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.359853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.359881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.363827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.363860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.363889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.367749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.367781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.367809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.371564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.371614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.371657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.375462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.375496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.375526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.379200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.379231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.379259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.383010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.383227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.383247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.387009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.387207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.387225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.391002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.391199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.391217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.394970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.395144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.395160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.947 7672.00 IOPS, 959.00 MiB/s [2024-12-06T12:24:32.605Z] [2024-12-06 12:24:32.400176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.400250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:45.947 [2024-12-06 12:24:32.400265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.404009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.404042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.404070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.407928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.407960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.407989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.411845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.411878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.411906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.415680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.415713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.415741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.419557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.419622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.419666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.423364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.423399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.423428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.427133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.947 [2024-12-06 12:24:32.427397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.947 [2024-12-06 12:24:32.427416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.947 [2024-12-06 12:24:32.430963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.430992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.431020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.434909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.435104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.435327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.439146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.439410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.439591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.443554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.443790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.443932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.447990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.448205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.448326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.452297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.452505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.452631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.456574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.456777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.456902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.460797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.461000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.461116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.465079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.465321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.465446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.469375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.469573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.469763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.473680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.473846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.473863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.477642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.477676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.477704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.481562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.481596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.481624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.485380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.485412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.485441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.489123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.489155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.489196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.492980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.493013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.493042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.496856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.496888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.496916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.500856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.500890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.500918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.504822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.504854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.504882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.508745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 [2024-12-06 12:24:32.508778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.508806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.948 [2024-12-06 12:24:32.512598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.948 
[2024-12-06 12:24:32.512630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.948 [2024-12-06 12:24:32.512659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.516615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.516646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.516674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.520534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.520582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.520611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.524414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.524445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.524473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.528279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.528312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.528340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.532078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.532110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.532139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.535976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.536008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.536036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.539909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.539942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.539970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.543782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.543815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.543844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.547907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.547939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.547968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.552557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.552588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.552615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.556953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.556984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.557013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.560807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.560838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.560867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.564802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.564836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.564865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.568622] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.568654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.568683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.572483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.572515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.572544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.576238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.576271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.576299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.579975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.580007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.580035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.583865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.583896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.583925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.587746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.587778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.587807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:45.949 [2024-12-06 12:24:32.591552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.591587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.591600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
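Editorial note: every completion printed here ends with "(00/22) ... p:0 m:0 dnr:0", i.e. status code type 0 (generic command status), status code 0x22 (Transient Transport Error), and Do Not Retry clear. A small decoder for that 16-bit completion status word, producing the same fields spdk_nvme_print_completion reports, might look like the sketch below; the bit layout follows the NVMe base specification, while the struct and function names are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Upper 16 bits of NVMe CQE Dword 3: phase tag plus the 15-bit status field.
 * Field layout per the NVMe base spec; names here are illustrative. */
struct cqe_status {
	uint8_t p;   /* phase tag                  */
	uint8_t sc;  /* status code                */
	uint8_t sct; /* status code type           */
	uint8_t crd; /* command retry delay        */
	uint8_t m;   /* more information available */
	uint8_t dnr; /* do not retry               */
};

static struct cqe_status decode_status(uint16_t raw)
{
	struct cqe_status s = {
		.p   = raw & 0x1,
		.sc  = (raw >> 1) & 0xFF,
		.sct = (raw >> 9) & 0x7,
		.crd = (raw >> 12) & 0x3,
		.m   = (raw >> 14) & 0x1,
		.dnr = (raw >> 15) & 0x1,
	};
	return s;
}

int main(void)
{
	/* sct=0, sc=0x22, crd=m=dnr=p=0: the "(00/22) ... p:0 m:0 dnr:0"
	 * completions that follow each data digest error in this log. */
	uint16_t raw = (0x0u << 9) | (0x22u << 1);
	struct cqe_status s = decode_status(raw);

	printf("(%02x/%02x) p:%u m:%u dnr:%u crd:%u\n",
	       s.sct, s.sc, s.p, s.m, s.dnr, s.crd);
	return 0;
}
```

Because dnr is 0 and the status is transient, the initiator is permitted to retry these READs; the test keeps injecting digest failures, which is why the same cid values cycle through the log repeatedly.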
00:16:45.949 [2024-12-06 12:24:32.595339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.949 [2024-12-06 12:24:32.595374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.949 [2024-12-06 12:24:32.595403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.950 [2024-12-06 12:24:32.599340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:45.950 [2024-12-06 12:24:32.599374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.950 [2024-12-06 12:24:32.599420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.210 [2024-12-06 12:24:32.603454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.210 [2024-12-06 12:24:32.603492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.210 [2024-12-06 12:24:32.603506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.210 [2024-12-06 12:24:32.607153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.210 [2024-12-06 12:24:32.607230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.210 [2024-12-06 12:24:32.607259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.210 [2024-12-06 12:24:32.611199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.611232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.611260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.614951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.614984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.615012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.618756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.618788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.618816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.622602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.622634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.622662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.626463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.626495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.626524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.630238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.630271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.630300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.634070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.634296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.634314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.638083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.638284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.638300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.642100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.642284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.642317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.646048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.646249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.646282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.650064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.650285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.650303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.654065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.654264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.654281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.658057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.658241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.658274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.662123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.662324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.662341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.666194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.666227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.666254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.669937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.670126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.670142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.673897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.674085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.674102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.677857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.678030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.678048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.681866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.681901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.681929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.685719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.685751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.685780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.689609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.689642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.689673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.693394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.693427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.693455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.697103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.697306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.697340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.701191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.701223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 
[2024-12-06 12:24:32.701251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.704951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.705143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.705159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.708991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.709025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.211 [2024-12-06 12:24:32.709053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.211 [2024-12-06 12:24:32.712813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.211 [2024-12-06 12:24:32.712846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.712874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.716712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.716745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.716773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.720566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.720599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.720627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.724413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.724445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.724473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.728750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.728799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.728827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.733025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.733060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.733089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.737413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.737451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.737464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.741725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.741759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.741787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.746252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.746300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.746315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.750649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.750682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.750710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.754822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.754855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.754883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.758927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.758960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.758988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.762994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.763027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.763055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.767053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.767086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.767114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.771136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.771212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.771242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.775121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.775154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.775211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.779092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.779124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.779152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.783134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.783209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.783238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.787054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.787086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.787115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.790930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.790963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.790991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.794750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.794782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.794810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.798592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.798624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.798652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.802380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.802412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.802440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.806094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.806126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.806155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.809934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.809967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.809996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.813836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 
00:16:46.212 [2024-12-06 12:24:32.813869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.813898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.817712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.817745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.212 [2024-12-06 12:24:32.817774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.212 [2024-12-06 12:24:32.821534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.212 [2024-12-06 12:24:32.821566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.821594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.213 [2024-12-06 12:24:32.825441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.213 [2024-12-06 12:24:32.825474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.825502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.213 [2024-12-06 12:24:32.829287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.213 [2024-12-06 12:24:32.829318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.829346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.213 [2024-12-06 12:24:32.833168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.213 [2024-12-06 12:24:32.833244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.833257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.213 [2024-12-06 12:24:32.837137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.213 [2024-12-06 12:24:32.837195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.837209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.213 [2024-12-06 12:24:32.840888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x81a620) 00:16:46.213 [2024-12-06 12:24:32.840920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.840948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.213 [2024-12-06 12:24:32.844648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.213 [2024-12-06 12:24:32.844681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.844708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.213 [2024-12-06 12:24:32.848434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.213 [2024-12-06 12:24:32.848466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.848495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.213 [2024-12-06 12:24:32.852268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.213 [2024-12-06 12:24:32.852327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.852355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.213 [2024-12-06 12:24:32.856129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.213 [2024-12-06 12:24:32.856352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.856369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.213 [2024-12-06 12:24:32.860256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.213 [2024-12-06 12:24:32.860289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.860317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.213 [2024-12-06 12:24:32.864460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.213 [2024-12-06 12:24:32.864491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.213 [2024-12-06 12:24:32.864520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.868545] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.868578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.868606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.872740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.872775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.872818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.876819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.876853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.876882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.880969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.881003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.881031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.885069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.885103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.885131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.889343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.889394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.889423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.893685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.893721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.893750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
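(The repeated pairs above come from the host-side data digest check on received NVMe/TCP C2HData PDUs: nvme_tcp_accel_seq_recv_compute_crc32_done reports a digest mismatch, and the affected READ is then completed and printed by spdk_nvme_print_completion. Below is a minimal, self-contained sketch, not SPDK's actual code, of what that check amounts to: the NVMe/TCP data digest is a CRC32C over the PDU payload, compared against the DDGST value carried in the PDU. Function and parameter names here are illustrative only.)

    #include <stddef.h>
    #include <stdint.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
     * init 0xFFFFFFFF, final XOR 0xFFFFFFFF - the digest NVMe/TCP uses. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Illustrative check: returns 1 when the DDGST received with the PDU
     * matches the digest recomputed over the payload; a 0 here corresponds
     * to the "data digest error" entries in this log. */
    static int ddgst_ok(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
    {
        return crc32c(payload, len) == recv_ddgst;
    }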
00:16:46.474 [2024-12-06 12:24:32.898163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.898274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.898291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.902622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.902655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.902683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.906787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.906821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.906850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.910874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.910908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.910937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.915007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.915041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.915070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.919236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.919290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.919319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.923111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.923349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.923368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.927383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.927420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.927433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.474 [2024-12-06 12:24:32.931161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.474 [2024-12-06 12:24:32.931205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.474 [2024-12-06 12:24:32.931233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.935200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.935421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.935440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.939425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.939473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.939488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.943353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.943389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.943402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.947060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.947278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.947311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.951169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.951371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.951391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.955207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.955240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.955291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.959197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.959230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.959259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.963213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.963441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.963459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.967352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.967389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.967403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.971369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.971406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.971419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.975151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.975368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.975386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.979243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.979315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.979328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.983045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.983233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.983275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.987229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.987263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.987314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.991087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.991258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.991315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.995226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.995260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.995327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:32.999081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:32.999294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:32.999327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:33.003345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:33.003382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:33.003395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:33.007199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:33.007233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
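(For reading the completion lines: "(00/22)" is the status code type / status code pair in hex, i.e. SCT 0x0 (generic command status) with SC 0x22, which the log itself names COMMAND TRANSIENT TRANSPORT ERROR; qid/cid identify the queue and command, cdw0 is command-specific dword 0, sqhd the submission queue head, and p/m/dnr the phase, more, and do-not-retry bits. A small hedged sketch of that (sct/sc) decoding, with a hypothetical helper name, follows; it is not an SPDK API.)

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: name the "(sct/sc)" pair printed by
     * spdk_nvme_print_completion(), e.g. (00/22) in the entries above. */
    static const char *nvme_generic_sc_name(uint8_t sct, uint8_t sc)
    {
        if (sct != 0x0) {
            return "non-generic status code type";
        }
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x22: return "COMMAND TRANSIENT TRANSPORT ERROR";
        default:   return "other generic status code";
        }
    }

    int main(void)
    {
        /* (00/22): status code type 0 (generic), status code 0x22. */
        printf("%s\n", nvme_generic_sc_name(0x00, 0x22));
        return 0;
    }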
00:16:46.475 [2024-12-06 12:24:33.007261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:33.011080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:33.011293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:33.011326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:33.015139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:33.015350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:33.015367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:33.019353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:33.019392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:33.019405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:33.023213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:33.023245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:33.023297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:33.027102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:33.027332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:33.027349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:33.031118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:33.031347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:33.031365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:33.035396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.475 [2024-12-06 12:24:33.035435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.475 [2024-12-06 12:24:33.035449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.475 [2024-12-06 12:24:33.039272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.039338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.039351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.043102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.043332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.043358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.047101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.047312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.047329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.051378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.051417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.051431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.055248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.055306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.055319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.059041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.059224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.059257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.063093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.063292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.063324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.067310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.067349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.067363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.071288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.071323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.071336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.075456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.075490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.075503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.080140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.080197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.080226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.084558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.084590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.084619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.088397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.088428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.088456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.092160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.092235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.092249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.095934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.095966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.095994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.099864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.099897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.099925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.103777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.103809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.103837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.107596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.107643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.107685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.111491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.111526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.111539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.115231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.115263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.115314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.118929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 
[2024-12-06 12:24:33.119105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.119123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.122993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.123209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.123228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.476 [2024-12-06 12:24:33.127483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.476 [2024-12-06 12:24:33.127521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.476 [2024-12-06 12:24:33.127534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.131571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.131638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.131651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.135684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.135748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.135777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.139766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.139798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.139826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.143554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.143590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.143633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.147397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.147435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.147448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.151163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.151403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.151422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.155426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.155463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.155476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.159098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.159303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.159320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.163053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.163236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.163254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.167061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.167242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.167258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.170911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.170940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.170970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.174694] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.174888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.175023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.179115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.179369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.739 [2024-12-06 12:24:33.179490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.739 [2024-12-06 12:24:33.183377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.739 [2024-12-06 12:24:33.183553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.183687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.187342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.187531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.187650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.191812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.192000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.192141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.196072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.196275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.196412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.200453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.200663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.200784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:16:46.740 [2024-12-06 12:24:33.204624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.204816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.204955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.208897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.209106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.209306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.213295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.213470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.213487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.217356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.217389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.217418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.221090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.221123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.221152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.224902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.224936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.224965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.228805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.228838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.228866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.232545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.232578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.232606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.236357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.236389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.236417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.240123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.240156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.240198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.243912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.243945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.243974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.247844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.247876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.247905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.251768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.251802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.251830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.255638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.255702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.255730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.259514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.259551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.259565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.263390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.263429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.263443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.267096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.267312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.267330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.270884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.270913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.270941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.274646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.274839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.274976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.278941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.279147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.279303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.283427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.283629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.283954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.740 [2024-12-06 12:24:33.287857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.740 [2024-12-06 12:24:33.288065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.740 [2024-12-06 12:24:33.288229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.292210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.292453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.292627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.296547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.296745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.296863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.300699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.300892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.301030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.304967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.305177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.305307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.309100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.309344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.309577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.313538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.313740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 
[2024-12-06 12:24:33.313837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.317640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.317675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.317703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.321566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.321599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.321627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.325444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.325477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.325505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.329311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.329342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.329370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.333220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.333252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.333281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.337031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.337250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.337268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.341054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.341235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.341268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.345114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.345312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.345329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.349180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.349221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.349250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.353037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.353237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.353254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.357082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.357249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.357266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.361132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.361344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.361360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.365108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.365137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.365165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.368964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.369175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.369303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.373124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.373375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.373495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.377335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.377534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.377650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.381537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.381764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.381885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.385778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.385972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.386104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.741 [2024-12-06 12:24:33.390253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:46.741 [2024-12-06 12:24:33.390457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.741 [2024-12-06 12:24:33.390578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:47.026 [2024-12-06 12:24:33.395168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x81a620) 00:16:47.026 [2024-12-06 12:24:33.395405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.026 [2024-12-06 12:24:33.395644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:47.026 7695.50 IOPS, 961.94 MiB/s 00:16:47.026 Latency(us) 00:16:47.026 [2024-12-06T12:24:33.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.026 Job: nvme0n1 (Core Mask 
0x2, workload: randread, depth: 16, IO size: 131072) 00:16:47.026 nvme0n1 : 2.00 7689.11 961.14 0.00 0.00 2077.73 1697.98 7387.69 00:16:47.026 [2024-12-06T12:24:33.684Z] =================================================================================================================== 00:16:47.026 [2024-12-06T12:24:33.684Z] Total : 7689.11 961.14 0.00 0.00 2077.73 1697.98 7387.69 00:16:47.026 { 00:16:47.026 "results": [ 00:16:47.026 { 00:16:47.026 "job": "nvme0n1", 00:16:47.026 "core_mask": "0x2", 00:16:47.026 "workload": "randread", 00:16:47.026 "status": "finished", 00:16:47.026 "queue_depth": 16, 00:16:47.026 "io_size": 131072, 00:16:47.026 "runtime": 2.003743, 00:16:47.026 "iops": 7689.1098309513745, 00:16:47.026 "mibps": 961.1387288689218, 00:16:47.026 "io_failed": 0, 00:16:47.026 "io_timeout": 0, 00:16:47.026 "avg_latency_us": 2077.733020055819, 00:16:47.026 "min_latency_us": 1697.9781818181818, 00:16:47.026 "max_latency_us": 7387.694545454546 00:16:47.026 } 00:16:47.026 ], 00:16:47.026 "core_count": 1 00:16:47.026 } 00:16:47.026 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:47.026 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:47.026 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:47.026 | .driver_specific 00:16:47.026 | .nvme_error 00:16:47.026 | .status_code 00:16:47.026 | .command_transient_transport_error' 00:16:47.026 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 497 > 0 )) 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79646 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79646 ']' 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79646 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79646 00:16:47.298 killing process with pid 79646 00:16:47.298 Received shutdown signal, test time was about 2.000000 seconds 00:16:47.298 00:16:47.298 Latency(us) 00:16:47.298 [2024-12-06T12:24:33.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.298 [2024-12-06T12:24:33.956Z] =================================================================================================================== 00:16:47.298 [2024-12-06T12:24:33.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79646' 00:16:47.298 12:24:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79646 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79646 00:16:47.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79699 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79699 /var/tmp/bperf.sock 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79699 ']' 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.298 12:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:47.298 [2024-12-06 12:24:33.921910] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:16:47.298 [2024-12-06 12:24:33.922226] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79699 ] 00:16:47.558 [2024-12-06 12:24:34.062155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.558 [2024-12-06 12:24:34.092310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.558 [2024-12-06 12:24:34.120745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:48.497 12:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.497 12:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:48.497 12:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:48.497 12:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:48.497 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:48.497 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.497 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:48.755 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.755 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:48.755 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:49.014 nvme0n1 00:16:49.014 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:49.014 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.014 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:49.014 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.014 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:49.014 12:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:49.014 Running I/O for 2 seconds... 
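Taken together, the xtrace above traces one complete error pass of the digest test: bdevperf is launched in wait mode (-z) on its own RPC socket, the NVMe bdev layer is told to keep per-command error statistics and retry indefinitely, the controller is attached with data digest (--ddgst) enabled, crc32c corruption is injected into the accel layer every 256th operation, a 2-second workload is driven through bdevperf.py, and afterwards the per-bdev command_transient_transport_error counter is read back with jq (the earlier randread pass asserted 497 > 0 this way). What follows is only a condensed sketch assembled from the commands visible in this log, not the digest.sh source; the socket name, target address and RPC arguments are copied from the lines above, and it is assumed that rpc_cmd (which passes no -s flag in the xtrace) addresses the nvmf target application on its default RPC socket.

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf side (initiator), as in bperf_rpc
    TGT="$SPDK/scripts/rpc.py"                            # default socket: assumed to be the nvmf target app

    # 1. Start bdevperf in wait mode on its own RPC socket (randwrite, 4 KiB I/O, QD 128, 2 s).
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # 2. Count NVMe errors per status code and retry forever, so digest errors do not fail the job.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # 3. Attach the subsystem with data digest enabled, then corrupt every 256th crc32c computed
    #    by the target's accel layer; the mismatching digests produce the "data digest error" /
    #    "COMMAND TRANSIENT TRANSPORT ERROR" lines seen throughout this log.
    $TGT accel_error_inject_error -o crc32c -t disable
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $TGT accel_error_inject_error -o crc32c -t corrupt -i 256

    # 4. Run the workload, then read how many commands completed with a transient transport error.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
    $BPERF bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

    kill "$bperfpid"   # the test script uses its killprocess helper here
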
00:16:49.014 [2024-12-06 12:24:35.592182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef7100 00:16:49.014 [2024-12-06 12:24:35.593894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.014 [2024-12-06 12:24:35.594098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:49.014 [2024-12-06 12:24:35.606721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef7970 00:16:49.014 [2024-12-06 12:24:35.608377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.014 [2024-12-06 12:24:35.608579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.014 [2024-12-06 12:24:35.620954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef81e0 00:16:49.014 [2024-12-06 12:24:35.622488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.014 [2024-12-06 12:24:35.622524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:49.014 [2024-12-06 12:24:35.634433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef8a50 00:16:49.014 [2024-12-06 12:24:35.636217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.014 [2024-12-06 12:24:35.636257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:49.014 [2024-12-06 12:24:35.648270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef92c0 00:16:49.014 [2024-12-06 12:24:35.649677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.014 [2024-12-06 12:24:35.649708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:49.014 [2024-12-06 12:24:35.661709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef9b30 00:16:49.014 [2024-12-06 12:24:35.663139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.014 [2024-12-06 12:24:35.663197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:49.273 [2024-12-06 12:24:35.676782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efa3a0 00:16:49.273 [2024-12-06 12:24:35.678135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.273 [2024-12-06 12:24:35.678166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0076 p:0 m:0 dnr:0 00:16:49.273 [2024-12-06 12:24:35.690275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efac10 00:16:49.273 [2024-12-06 12:24:35.691709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.273 [2024-12-06 12:24:35.691739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:49.273 [2024-12-06 12:24:35.703864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efb480 00:16:49.273 [2024-12-06 12:24:35.705409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.273 [2024-12-06 12:24:35.705433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:49.273 [2024-12-06 12:24:35.717539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efbcf0 00:16:49.273 [2024-12-06 12:24:35.718845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.273 [2024-12-06 12:24:35.718875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:49.273 [2024-12-06 12:24:35.731084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efc560 00:16:49.273 [2024-12-06 12:24:35.732513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.273 [2024-12-06 12:24:35.732545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:49.273 [2024-12-06 12:24:35.744650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efcdd0 00:16:49.273 [2024-12-06 12:24:35.745913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.273 [2024-12-06 12:24:35.745944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:49.273 [2024-12-06 12:24:35.758162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efd640 00:16:49.274 [2024-12-06 12:24:35.759515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.759548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:49.274 [2024-12-06 12:24:35.771508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efdeb0 00:16:49.274 [2024-12-06 12:24:35.772797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.772827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:49.274 [2024-12-06 12:24:35.785040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efe720 00:16:49.274 [2024-12-06 12:24:35.786339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.786369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:49.274 [2024-12-06 12:24:35.799122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eff3c8 00:16:49.274 [2024-12-06 12:24:35.800634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.800661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:49.274 [2024-12-06 12:24:35.821593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eff3c8 00:16:49.274 [2024-12-06 12:24:35.824366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.824401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:49.274 [2024-12-06 12:24:35.836677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efe720 00:16:49.274 [2024-12-06 12:24:35.839350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.839384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:49.274 [2024-12-06 12:24:35.850970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efdeb0 00:16:49.274 [2024-12-06 12:24:35.853357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.853387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:49.274 [2024-12-06 12:24:35.864640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efd640 00:16:49.274 [2024-12-06 12:24:35.866914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.866944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:49.274 [2024-12-06 12:24:35.878165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efcdd0 00:16:49.274 [2024-12-06 12:24:35.880422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.880453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:49.274 [2024-12-06 12:24:35.891582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efc560 00:16:49.274 [2024-12-06 12:24:35.894154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.894390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:49.274 [2024-12-06 12:24:35.906595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efbcf0 00:16:49.274 [2024-12-06 12:24:35.908920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.909114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:49.274 [2024-12-06 12:24:35.920646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efb480 00:16:49.274 [2024-12-06 12:24:35.922946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.274 [2024-12-06 12:24:35.923140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:49.533 [2024-12-06 12:24:35.935870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efac10 00:16:49.533 [2024-12-06 12:24:35.938150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.533 [2024-12-06 12:24:35.938398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:49.533 [2024-12-06 12:24:35.950100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efa3a0 00:16:49.533 [2024-12-06 12:24:35.952511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.533 [2024-12-06 12:24:35.952704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:49.533 [2024-12-06 12:24:35.964385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef9b30 00:16:49.533 [2024-12-06 12:24:35.966676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.533 [2024-12-06 12:24:35.966871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:49.533 [2024-12-06 12:24:35.978593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef92c0 00:16:49.533 [2024-12-06 12:24:35.980810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.533 [2024-12-06 12:24:35.981004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:49.533 [2024-12-06 12:24:35.992681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef8a50 00:16:49.533 [2024-12-06 12:24:35.994895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.533 [2024-12-06 12:24:35.995074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:49.533 [2024-12-06 12:24:36.006857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef81e0 00:16:49.533 [2024-12-06 12:24:36.009104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.009327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.020830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef7970 00:16:49.534 [2024-12-06 12:24:36.023034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.023262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.034938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef7100 00:16:49.534 [2024-12-06 12:24:36.037100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.037327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.048792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef6890 00:16:49.534 [2024-12-06 12:24:36.050985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.051201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.062565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef6020 00:16:49.534 [2024-12-06 12:24:36.064829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.064861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.076159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef57b0 00:16:49.534 [2024-12-06 12:24:36.078137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.078191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.089392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef4f40 00:16:49.534 [2024-12-06 12:24:36.091379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.091411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.102544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef46d0 00:16:49.534 [2024-12-06 12:24:36.104479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.104509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.115689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef3e60 00:16:49.534 [2024-12-06 12:24:36.117642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.117672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.128970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef35f0 00:16:49.534 [2024-12-06 12:24:36.130982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.131012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.142386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef2d80 00:16:49.534 [2024-12-06 12:24:36.144291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.144320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.155551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef2510 00:16:49.534 [2024-12-06 12:24:36.157744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.157775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.169176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef1ca0 00:16:49.534 [2024-12-06 12:24:36.170985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 
12:24:36.171015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:49.534 [2024-12-06 12:24:36.182494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef1430 00:16:49.534 [2024-12-06 12:24:36.184639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.534 [2024-12-06 12:24:36.184672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:49.794 [2024-12-06 12:24:36.197549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef0bc0 00:16:49.794 [2024-12-06 12:24:36.199755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.794 [2024-12-06 12:24:36.199786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:49.794 [2024-12-06 12:24:36.211165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef0350 00:16:49.794 [2024-12-06 12:24:36.213032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.794 [2024-12-06 12:24:36.213062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:49.794 [2024-12-06 12:24:36.224717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eefae0 00:16:49.794 [2024-12-06 12:24:36.226612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.794 [2024-12-06 12:24:36.226640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:49.794 [2024-12-06 12:24:36.238142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eef270 00:16:49.794 [2024-12-06 12:24:36.240267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.794 [2024-12-06 12:24:36.240320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:49.794 [2024-12-06 12:24:36.251673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eeea00 00:16:49.794 [2024-12-06 12:24:36.253501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.794 [2024-12-06 12:24:36.253531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:49.794 [2024-12-06 12:24:36.265101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eee190 00:16:49.794 [2024-12-06 12:24:36.266902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:49.794 [2024-12-06 12:24:36.266932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:49.794 [2024-12-06 12:24:36.278566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eed920 00:16:49.794 [2024-12-06 12:24:36.280348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.794 [2024-12-06 12:24:36.280381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:49.794 [2024-12-06 12:24:36.292019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eed0b0 00:16:49.794 [2024-12-06 12:24:36.293858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.794 [2024-12-06 12:24:36.293883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:49.794 [2024-12-06 12:24:36.305524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eec840 00:16:49.794 [2024-12-06 12:24:36.307180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.794 [2024-12-06 12:24:36.307237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:49.794 [2024-12-06 12:24:36.318878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eebfd0 00:16:49.794 [2024-12-06 12:24:36.320690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.794 [2024-12-06 12:24:36.320736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:49.794 [2024-12-06 12:24:36.332546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eeb760 00:16:49.794 [2024-12-06 12:24:36.334179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.794 [2024-12-06 12:24:36.334235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:49.795 [2024-12-06 12:24:36.345952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eeaef0 00:16:49.795 [2024-12-06 12:24:36.347728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.795 [2024-12-06 12:24:36.347757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:49.795 [2024-12-06 12:24:36.359518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eea680 00:16:49.795 [2024-12-06 12:24:36.361161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19446 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:49.795 [2024-12-06 12:24:36.361217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:49.795 [2024-12-06 12:24:36.372948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee9e10 00:16:49.795 [2024-12-06 12:24:36.374602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.795 [2024-12-06 12:24:36.374632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:49.795 [2024-12-06 12:24:36.386352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee95a0 00:16:49.795 [2024-12-06 12:24:36.387954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.795 [2024-12-06 12:24:36.387984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:49.795 [2024-12-06 12:24:36.401522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee8d30 00:16:49.795 [2024-12-06 12:24:36.403123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.795 [2024-12-06 12:24:36.403153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:49.795 [2024-12-06 12:24:36.415125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee84c0 00:16:49.795 [2024-12-06 12:24:36.416803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.795 [2024-12-06 12:24:36.416833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:49.795 [2024-12-06 12:24:36.428711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee7c50 00:16:49.795 [2024-12-06 12:24:36.430307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.795 [2024-12-06 12:24:36.430337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:49.795 [2024-12-06 12:24:36.442621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee73e0 00:16:49.795 [2024-12-06 12:24:36.444307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.795 [2024-12-06 12:24:36.444340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.458837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee6b70 00:16:50.055 [2024-12-06 12:24:36.460550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:20211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.460582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.474644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee6300 00:16:50.055 [2024-12-06 12:24:36.476231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.476272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.489318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee5a90 00:16:50.055 [2024-12-06 12:24:36.490843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.490875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.503423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee5220 00:16:50.055 [2024-12-06 12:24:36.505148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.505200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.517798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee49b0 00:16:50.055 [2024-12-06 12:24:36.519295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.519327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.531838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee4140 00:16:50.055 [2024-12-06 12:24:36.533303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.533334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.545782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee38d0 00:16:50.055 [2024-12-06 12:24:36.547334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.547366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.559857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee3060 00:16:50.055 [2024-12-06 12:24:36.561458] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.561482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.574240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee27f0 00:16:50.055 [2024-12-06 12:24:36.575879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.575912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:50.055 18091.00 IOPS, 70.67 MiB/s [2024-12-06T12:24:36.713Z] [2024-12-06 12:24:36.588949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee1f80 00:16:50.055 [2024-12-06 12:24:36.590359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.590391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.603195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee1710 00:16:50.055 [2024-12-06 12:24:36.604597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.604629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.617833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee0ea0 00:16:50.055 [2024-12-06 12:24:36.619353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.619549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.632784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee0630 00:16:50.055 [2024-12-06 12:24:36.634203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.634255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.646582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016edfdc0 00:16:50.055 [2024-12-06 12:24:36.648043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.648276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.660717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016edf550 
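Note on the repeating pairs above: these digest failures are provoked on purpose. digest.sh corrupts the crc32c operation in the bperf host's accel layer, so the data digests it attaches to its WRITE data no longer match the payload and each I/O completes with COMMAND TRANSIENT TRANSPORT ERROR, exactly as printed here. A condensed sketch of that setup, assembled from the rpc.py calls traced further down in this log (socket path, target address, NQN and flags are the ones this job uses; bdevperf is assumed to be already running on the same socket, and PID/retry handling is omitted):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    # keep per-bdev NVMe error statistics and never retry errored I/O away
    $RPC -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # make the crc32c results the host computes for its data digests come out wrong
    # (flags copied from the traced digest.sh call)
    $RPC -s $SOCK accel_error_inject_error -o crc32c -t corrupt -i 32
    # attach the target with data digest (DDGST) enabled
    $RPC -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # drive the timed workload through the already-running bdevperf instance
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests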
00:16:50.055 [2024-12-06 12:24:36.662136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.662361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.674727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016edece0 00:16:50.055 [2024-12-06 12:24:36.676137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.676360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.688647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ede470 00:16:50.055 [2024-12-06 12:24:36.689904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.689936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:50.055 [2024-12-06 12:24:36.707708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eddc00 00:16:50.055 [2024-12-06 12:24:36.710283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-06 12:24:36.710313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:50.316 [2024-12-06 12:24:36.722006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ede470 00:16:50.316 [2024-12-06 12:24:36.724456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.316 [2024-12-06 12:24:36.724486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:50.316 [2024-12-06 12:24:36.735664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016edece0 00:16:50.316 [2024-12-06 12:24:36.737912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.316 [2024-12-06 12:24:36.737942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:50.316 [2024-12-06 12:24:36.749150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016edf550 00:16:50.316 [2024-12-06 12:24:36.751505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.316 [2024-12-06 12:24:36.751538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:50.316 [2024-12-06 12:24:36.762792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with 
pdu=0x200016edfdc0 00:16:50.316 [2024-12-06 12:24:36.765055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.316 [2024-12-06 12:24:36.765084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:50.316 [2024-12-06 12:24:36.776288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee0630 00:16:50.316 [2024-12-06 12:24:36.778485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.316 [2024-12-06 12:24:36.778514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:50.316 [2024-12-06 12:24:36.789620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee0ea0 00:16:50.316 [2024-12-06 12:24:36.791851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.316 [2024-12-06 12:24:36.791882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:50.316 [2024-12-06 12:24:36.803044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee1710 00:16:50.316 [2024-12-06 12:24:36.805373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.316 [2024-12-06 12:24:36.805403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:50.316 [2024-12-06 12:24:36.816493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee1f80 00:16:50.316 [2024-12-06 12:24:36.818898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.316 [2024-12-06 12:24:36.818931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:50.316 [2024-12-06 12:24:36.831803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee27f0 00:16:50.316 [2024-12-06 12:24:36.834490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.316 [2024-12-06 12:24:36.834540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:50.316 [2024-12-06 12:24:36.847785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee3060 00:16:50.316 [2024-12-06 12:24:36.850065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.316 [2024-12-06 12:24:36.850094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:50.316 [2024-12-06 12:24:36.862599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1559b70) with pdu=0x200016ee38d0 00:16:50.317 [2024-12-06 12:24:36.864905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.317 [2024-12-06 12:24:36.864936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:50.317 [2024-12-06 12:24:36.877218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee4140 00:16:50.317 [2024-12-06 12:24:36.879572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.317 [2024-12-06 12:24:36.879798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:50.317 [2024-12-06 12:24:36.892552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee49b0 00:16:50.317 [2024-12-06 12:24:36.895327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.317 [2024-12-06 12:24:36.895362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:50.317 [2024-12-06 12:24:36.906827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee5220 00:16:50.317 [2024-12-06 12:24:36.908941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.317 [2024-12-06 12:24:36.908971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:50.317 [2024-12-06 12:24:36.920353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee5a90 00:16:50.317 [2024-12-06 12:24:36.922386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.317 [2024-12-06 12:24:36.922416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:50.317 [2024-12-06 12:24:36.933713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee6300 00:16:50.317 [2024-12-06 12:24:36.935784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.317 [2024-12-06 12:24:36.935813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:50.317 [2024-12-06 12:24:36.948280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee6b70 00:16:50.317 [2024-12-06 12:24:36.950288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.317 [2024-12-06 12:24:36.950319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:50.317 [2024-12-06 12:24:36.962446] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee73e0 00:16:50.317 [2024-12-06 12:24:36.964751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.317 [2024-12-06 12:24:36.964781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:36.977733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee7c50 00:16:50.577 [2024-12-06 12:24:36.979936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:36.979963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:36.991433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee84c0 00:16:50.577 [2024-12-06 12:24:36.993754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:36.993784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.005452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee8d30 00:16:50.577 [2024-12-06 12:24:37.007401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.007604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.019165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee95a0 00:16:50.577 [2024-12-06 12:24:37.021190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.021231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.032572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ee9e10 00:16:50.577 [2024-12-06 12:24:37.034467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.034497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.046030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eea680 00:16:50.577 [2024-12-06 12:24:37.048041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.048070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.059979] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eeaef0 00:16:50.577 [2024-12-06 12:24:37.062140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.062194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.073700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eeb760 00:16:50.577 [2024-12-06 12:24:37.075627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.075818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.087480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eebfd0 00:16:50.577 [2024-12-06 12:24:37.089644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.089675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.101294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eec840 00:16:50.577 [2024-12-06 12:24:37.103086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.103117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.114683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eed0b0 00:16:50.577 [2024-12-06 12:24:37.116529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.116560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.128053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eed920 00:16:50.577 [2024-12-06 12:24:37.129952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.129982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.141619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eee190 00:16:50.577 [2024-12-06 12:24:37.143471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.143503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 
12:24:37.154952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eeea00 00:16:50.577 [2024-12-06 12:24:37.156877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.156903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:50.577 [2024-12-06 12:24:37.168639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eef270 00:16:50.577 [2024-12-06 12:24:37.170370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.577 [2024-12-06 12:24:37.170399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:50.578 [2024-12-06 12:24:37.181992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016eefae0 00:16:50.578 [2024-12-06 12:24:37.183844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.578 [2024-12-06 12:24:37.183873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:50.578 [2024-12-06 12:24:37.195511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef0350 00:16:50.578 [2024-12-06 12:24:37.197201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.578 [2024-12-06 12:24:37.197413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:50.578 [2024-12-06 12:24:37.209285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef0bc0 00:16:50.578 [2024-12-06 12:24:37.211090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.578 [2024-12-06 12:24:37.211362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:50.578 [2024-12-06 12:24:37.223532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef1430 00:16:50.578 [2024-12-06 12:24:37.225311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.578 [2024-12-06 12:24:37.225512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:50.837 [2024-12-06 12:24:37.238732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef1ca0 00:16:50.837 [2024-12-06 12:24:37.240610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.837 [2024-12-06 12:24:37.240804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
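Each of those transient-transport completions is also counted in the bdev's NVMe error statistics (enabled via --nvme-error-stat), and that counter is what decides pass or fail: once the 2-second window ends, digest.sh reads it back over the bperf RPC socket and requires it to be non-zero, which is the (( 143 > 0 )) check traced a little further down. A sketch of that read-back using the same rpc.py call and jq filter that appear in the trace; the errcount variable name is illustrative only:

    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error
               | .status_code | .command_transient_transport_error')
    # at least one injected digest error must have been observed
    (( errcount > 0 ))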
00:16:50.837 [2024-12-06 12:24:37.252761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef2510 00:16:50.837 [2024-12-06 12:24:37.254606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.837 [2024-12-06 12:24:37.254807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:50.837 [2024-12-06 12:24:37.267333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef2d80 00:16:50.837 [2024-12-06 12:24:37.269204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.837 [2024-12-06 12:24:37.269406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:50.837 [2024-12-06 12:24:37.281970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef35f0 00:16:50.837 [2024-12-06 12:24:37.283814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.283992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.296175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef3e60 00:16:50.838 [2024-12-06 12:24:37.297933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.298125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.310074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef46d0 00:16:50.838 [2024-12-06 12:24:37.311869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.312065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.324309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef4f40 00:16:50.838 [2024-12-06 12:24:37.326004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.326222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.338257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef57b0 00:16:50.838 [2024-12-06 12:24:37.339812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.339844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 
sqhd:0008 p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.351667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef6020 00:16:50.838 [2024-12-06 12:24:37.353144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.353215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.365132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef6890 00:16:50.838 [2024-12-06 12:24:37.366700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.366731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.378959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef7100 00:16:50.838 [2024-12-06 12:24:37.380519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.380549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.392299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef7970 00:16:50.838 [2024-12-06 12:24:37.394054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.394084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.406153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef81e0 00:16:50.838 [2024-12-06 12:24:37.407863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.407893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.421193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef8a50 00:16:50.838 [2024-12-06 12:24:37.422650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.422681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.434590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef92c0 00:16:50.838 [2024-12-06 12:24:37.436020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.436049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.448054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016ef9b30 00:16:50.838 [2024-12-06 12:24:37.449565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.449591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.461614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efa3a0 00:16:50.838 [2024-12-06 12:24:37.463009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.463039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.475194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efac10 00:16:50.838 [2024-12-06 12:24:37.476562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.476592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:50.838 [2024-12-06 12:24:37.488490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efb480 00:16:50.838 [2024-12-06 12:24:37.489998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.838 [2024-12-06 12:24:37.490030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:51.098 [2024-12-06 12:24:37.503178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efbcf0 00:16:51.098 [2024-12-06 12:24:37.504589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.098 [2024-12-06 12:24:37.504619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:51.098 [2024-12-06 12:24:37.516760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efc560 00:16:51.098 [2024-12-06 12:24:37.518067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.098 [2024-12-06 12:24:37.518097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:51.098 [2024-12-06 12:24:37.530284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efcdd0 00:16:51.098 [2024-12-06 12:24:37.531637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.098 [2024-12-06 12:24:37.531684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:51.098 [2024-12-06 12:24:37.543785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efd640 00:16:51.098 [2024-12-06 12:24:37.545180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.098 [2024-12-06 12:24:37.545225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:51.098 [2024-12-06 12:24:37.557228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efdeb0 00:16:51.098 [2024-12-06 12:24:37.558522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.098 [2024-12-06 12:24:37.558569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:51.098 [2024-12-06 12:24:37.570632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016efe720 00:16:51.098 [2024-12-06 12:24:37.571899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.098 [2024-12-06 12:24:37.571929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:51.098 18154.00 IOPS, 70.91 MiB/s [2024-12-06T12:24:37.756Z] [2024-12-06 12:24:37.586559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559b70) with pdu=0x200016edece0 00:16:51.098 [2024-12-06 12:24:37.586748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.098 [2024-12-06 12:24:37.586767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:51.098 00:16:51.098 Latency(us) 00:16:51.098 [2024-12-06T12:24:37.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.098 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.098 nvme0n1 : 2.01 18163.58 70.95 0.00 0.00 7033.78 1936.29 26691.03 00:16:51.098 [2024-12-06T12:24:37.756Z] =================================================================================================================== 00:16:51.098 [2024-12-06T12:24:37.756Z] Total : 18163.58 70.95 0.00 0.00 7033.78 1936.29 26691.03 00:16:51.098 { 00:16:51.098 "results": [ 00:16:51.098 { 00:16:51.098 "job": "nvme0n1", 00:16:51.099 "core_mask": "0x2", 00:16:51.099 "workload": "randwrite", 00:16:51.099 "status": "finished", 00:16:51.099 "queue_depth": 128, 00:16:51.099 "io_size": 4096, 00:16:51.099 "runtime": 2.007148, 00:16:51.099 "iops": 18163.583353096034, 00:16:51.099 "mibps": 70.95149747303138, 00:16:51.099 "io_failed": 0, 00:16:51.099 "io_timeout": 0, 00:16:51.099 "avg_latency_us": 7033.780461465189, 00:16:51.099 "min_latency_us": 1936.290909090909, 00:16:51.099 "max_latency_us": 26691.025454545455 00:16:51.099 } 00:16:51.099 ], 00:16:51.099 "core_count": 1 00:16:51.099 } 00:16:51.099 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:51.099 12:24:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:51.099 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:51.099 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:51.099 | .driver_specific 00:16:51.099 | .nvme_error 00:16:51.099 | .status_code 00:16:51.099 | .command_transient_transport_error' 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79699 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79699 ']' 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79699 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79699 00:16:51.358 killing process with pid 79699 00:16:51.358 Received shutdown signal, test time was about 2.000000 seconds 00:16:51.358 00:16:51.358 Latency(us) 00:16:51.358 [2024-12-06T12:24:38.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.358 [2024-12-06T12:24:38.016Z] =================================================================================================================== 00:16:51.358 [2024-12-06T12:24:38.016Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79699' 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79699 00:16:51.358 12:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79699 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79759 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79759 /var/tmp/bperf.sock 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79759 ']' 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:51.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.618 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:51.618 [2024-12-06 12:24:38.126084] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:16:51.618 [2024-12-06 12:24:38.126394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79759 ] 00:16:51.618 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:51.618 Zero copy mechanism will not be used. 00:16:51.618 [2024-12-06 12:24:38.270507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.877 [2024-12-06 12:24:38.299315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.877 [2024-12-06 12:24:38.326432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:51.877 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.877 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:51.877 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:51.877 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:52.137 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:52.137 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.137 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:52.137 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.137 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:52.137 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:52.396
nvme0n1 00:16:52.396 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:52.396 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.396 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:52.396 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.396 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:52.396 12:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:52.656 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:52.656 Zero copy mechanism will not be used. 00:16:52.656 Running I/O for 2 seconds... 00:16:52.656 [2024-12-06 12:24:39.089596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.089683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.089709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.094253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.094332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.094353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.099360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.099470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.099492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.105491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.105603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.105623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.110525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.110624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.110644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:000a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.114878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.114975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.114995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.119450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.119531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.119551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.124068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.124386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.124408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.128836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.128914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.128933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.133471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.133531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.133552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.137944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.138019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.138039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.142483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.142570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.142589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.146760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.146869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.146889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.151223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.151322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.151343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.155618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.155707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.656 [2024-12-06 12:24:39.155728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.656 [2024-12-06 12:24:39.160154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.656 [2024-12-06 12:24:39.160455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.160477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.164830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.164938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.164959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.169395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.169485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.169505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.173800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.173886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.173905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.178240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.178320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.178340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.182654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.182729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.182749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.187016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.187096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.187115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.191523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.191641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.191675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.195956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.196197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.196235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.200826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.200901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.200921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.205242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.205331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.205350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.209680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.209754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.209773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.214112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.214215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.214236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.218593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.218669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.218688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.223290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.223353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.223374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.227876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.228117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.228138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.232629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.232704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.232724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.237044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.237128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 
12:24:39.237147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.241489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.241581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.241600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.245880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.245956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.245975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.250487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.250574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.250593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.254871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.254976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.255018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.259619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.259723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.259743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.264297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.264398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.264419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.269053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.269148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:52.657 [2024-12-06 12:24:39.269168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.273828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.273916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.273935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.278365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.278442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.278462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.282894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.282975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.282995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.287491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.287584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.287636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.292018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.292277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.292299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.296724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.296801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.296820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.301122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.301233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.301253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.305522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.305633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.305652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.657 [2024-12-06 12:24:39.310412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.657 [2024-12-06 12:24:39.310490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.657 [2024-12-06 12:24:39.310510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.315018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.315103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.315122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.319842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.320074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.320095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.324589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.324664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.324683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.329018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.329098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.329118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.333625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.333711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.333731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.338026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.338129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.338148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.342636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.342729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.342748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.346986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.347065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.347085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.351740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.351815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.351834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.356139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.356273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.356293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.360724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.360801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.360820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.365126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.365245] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.365266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.369548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.369639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.369673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.374009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.374084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.374104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.378496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.378582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.378601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.382868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.382954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.382973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.387290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.387386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.387407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.391732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.391807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.391826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.396198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.396314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.396334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.400663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.400739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.919 [2024-12-06 12:24:39.400758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.919 [2024-12-06 12:24:39.405034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.919 [2024-12-06 12:24:39.405109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.405128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.409470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.409557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.409591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.413837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.413930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.413949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.418256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.418339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.418359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.422661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.422736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.422755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.427034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 
12:24:39.427120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.427139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.431564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.431641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.431675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.436097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.436183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.436217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.440540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.440627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.440646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.444972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.445233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.445254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.449779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.449873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.449892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.454248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.454334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.454353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.458599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with 
pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.458685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.458704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.462976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.463071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.463090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.467460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.467554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.467575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.471983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.472069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.472088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.476465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.476537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.476556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.480814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.480891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.480910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.485296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.485371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.485390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.489668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.489742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.489761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.494235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.494322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.494342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.498596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.498688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.498707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.502984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.503058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.503076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.507494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.507612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.507631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.512021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.512279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.512301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.516700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.516774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.516794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.521054] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.920 [2024-12-06 12:24:39.521131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.920 [2024-12-06 12:24:39.521150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.920 [2024-12-06 12:24:39.525615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.921 [2024-12-06 12:24:39.525707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.921 [2024-12-06 12:24:39.525727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.921 [2024-12-06 12:24:39.530004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.921 [2024-12-06 12:24:39.530099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.921 [2024-12-06 12:24:39.530118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.921 [2024-12-06 12:24:39.534587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.921 [2024-12-06 12:24:39.534673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.921 [2024-12-06 12:24:39.534692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.921 [2024-12-06 12:24:39.538973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.921 [2024-12-06 12:24:39.539058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.921 [2024-12-06 12:24:39.539078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.921 [2024-12-06 12:24:39.543582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.921 [2024-12-06 12:24:39.543713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.921 [2024-12-06 12:24:39.543732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.921 [2024-12-06 12:24:39.548062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.921 [2024-12-06 12:24:39.548350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.921 [2024-12-06 12:24:39.548371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.921 [2024-12-06 12:24:39.552841] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.921 [2024-12-06 12:24:39.552937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.921 [2024-12-06 12:24:39.552956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:52.921 [2024-12-06 12:24:39.557357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.921 [2024-12-06 12:24:39.557440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.921 [2024-12-06 12:24:39.557460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:52.921 [2024-12-06 12:24:39.561818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.921 [2024-12-06 12:24:39.561904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.921 [2024-12-06 12:24:39.561923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:52.921 [2024-12-06 12:24:39.566259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.921 [2024-12-06 12:24:39.566345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.921 [2024-12-06 12:24:39.566364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:52.921 [2024-12-06 12:24:39.570910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:52.921 [2024-12-06 12:24:39.571011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.921 [2024-12-06 12:24:39.571030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.576008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.576286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.576307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.581087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.581166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.581203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.182 
[2024-12-06 12:24:39.585662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.585756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.585775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.590033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.590108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.590127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.594601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.594675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.594695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.599061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.599143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.599161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.603633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.603755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.603774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.608127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.608210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.608230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.612556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.612626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.612645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a 
p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.616892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.616981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.617000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.621367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.621458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.621478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.625836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.625918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.625937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.631498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.631602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.631653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.637284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.637362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.637382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.641690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.641763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.641783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.646115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.646184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.646203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.650519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.650601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.650620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.654871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.654954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.654973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.659398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.659458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.659479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.663827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.663897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.663916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.668290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.668358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.182 [2024-12-06 12:24:39.668377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.182 [2024-12-06 12:24:39.672724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.182 [2024-12-06 12:24:39.672793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.672813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.677102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.677220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.677239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.681555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.681654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.681674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.686008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.686079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.686099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.690501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.690570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.690589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.694868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.694937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.694956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.699356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.699432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.699454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.703867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.703939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.703958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.708426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.708506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.708526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.713071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.713164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.713227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.717735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.717806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.717825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.722341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.722410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.722430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.726704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.726787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.726806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.731125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.731240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.731262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.735929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.735999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.736019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.741063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.741129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 
12:24:39.741151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.746731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.746815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.746835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.752050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.752140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.752161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.756870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.756957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.756978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.761603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.761674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.761693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.765993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.766082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.766101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.770462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.770552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.770571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.774780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.774870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:53.183 [2024-12-06 12:24:39.774889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.779197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.779331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.779351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.783647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.783732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.783752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.788129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.788229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.788249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.792565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.792655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.183 [2024-12-06 12:24:39.792674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.183 [2024-12-06 12:24:39.796973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.183 [2024-12-06 12:24:39.797062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.184 [2024-12-06 12:24:39.797082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.184 [2024-12-06 12:24:39.801483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.184 [2024-12-06 12:24:39.801590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.184 [2024-12-06 12:24:39.801609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.184 [2024-12-06 12:24:39.805873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.184 [2024-12-06 12:24:39.805942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.184 [2024-12-06 12:24:39.805962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.184 [2024-12-06 12:24:39.810273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.184 [2024-12-06 12:24:39.810355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.184 [2024-12-06 12:24:39.810374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.184 [2024-12-06 12:24:39.814678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.184 [2024-12-06 12:24:39.814748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.184 [2024-12-06 12:24:39.814767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.184 [2024-12-06 12:24:39.818991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.184 [2024-12-06 12:24:39.819079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.184 [2024-12-06 12:24:39.819099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.184 [2024-12-06 12:24:39.823548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.184 [2024-12-06 12:24:39.823688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.184 [2024-12-06 12:24:39.823707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.184 [2024-12-06 12:24:39.828004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.184 [2024-12-06 12:24:39.828073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.184 [2024-12-06 12:24:39.828093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.184 [2024-12-06 12:24:39.832647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.184 [2024-12-06 12:24:39.832734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.184 [2024-12-06 12:24:39.832755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.445 [2024-12-06 12:24:39.837778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.445 [2024-12-06 12:24:39.837862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.445 [2024-12-06 12:24:39.837882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.445 [2024-12-06 12:24:39.842452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.445 [2024-12-06 12:24:39.842528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.445 [2024-12-06 12:24:39.842563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.445 [2024-12-06 12:24:39.847103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.445 [2024-12-06 12:24:39.847199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.445 [2024-12-06 12:24:39.847218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.445 [2024-12-06 12:24:39.851825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.445 [2024-12-06 12:24:39.851895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.445 [2024-12-06 12:24:39.851913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.445 [2024-12-06 12:24:39.856292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.445 [2024-12-06 12:24:39.856362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.445 [2024-12-06 12:24:39.856382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.445 [2024-12-06 12:24:39.860710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.445 [2024-12-06 12:24:39.860801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.445 [2024-12-06 12:24:39.860835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.445 [2024-12-06 12:24:39.865162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.445 [2024-12-06 12:24:39.865260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.865280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.869593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.869663] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.869682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.874141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.874257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.874277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.878436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.878515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.878534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.882907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.882980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.883001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.887866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.887940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.887960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.892846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.892934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.892954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.897899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.897972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.897992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.903341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.903421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.903444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.908469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.908583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.908634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.913481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.913607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.913626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.918146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.918257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.918279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.922891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.922967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.922986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.927701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.927771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.927790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.932333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.932403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.932423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.936779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 
12:24:39.936849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.936868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.941218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.941308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.941328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.945785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.945868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.945887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.950205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.950275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.950294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.954565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.954636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.954655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.959004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.959073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.959093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.963490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.963559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.963581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.967887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 
00:16:53.446 [2024-12-06 12:24:39.967957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.967976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.972315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.972384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.972403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.976725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.976794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.976813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.446 [2024-12-06 12:24:39.981102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.446 [2024-12-06 12:24:39.981198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.446 [2024-12-06 12:24:39.981218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:39.985467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:39.985560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:39.985579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:39.989796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:39.989885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:39.989904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:39.994193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:39.994277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:39.994296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:39.998798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:39.998892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:39.998927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.004288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.004373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.004397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.009322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.009406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.009430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.014663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.014754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.014777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.020085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.020151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.020175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.025563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.025638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.025660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.030803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.030875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.030896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.035871] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.035986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.036006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.040741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.040872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.040894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.045626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.045709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.045728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.050393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.050476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.050496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.054996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.055067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.055087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.059922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.060006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.060025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.064672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.064757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.064777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.069209] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.069279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.069298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.073978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.074054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.074073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.078627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.078712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.078731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.447 6673.00 IOPS, 834.12 MiB/s [2024-12-06T12:24:40.105Z] [2024-12-06 12:24:40.084335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.084421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.084441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.088993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.089070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.089090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.093685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.093770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.093790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.447 [2024-12-06 12:24:40.098813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.447 [2024-12-06 12:24:40.098916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.447 [2024-12-06 12:24:40.098937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.104155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.104269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.104291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.109197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.109276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.109295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.113792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.113863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.113883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.118528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.118601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.118622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.123407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.123501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.123524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.127967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.128038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.128058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.132515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.132585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.132605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.137436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.137521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.137540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.142047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.142133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.142152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.146692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.146777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.146797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.151433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.151509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.151531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.156222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.156337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.156357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.162097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.162208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.162239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.167888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.167983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.168004] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.172510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.172585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.172604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.177054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.177139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.177158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.181680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.181752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.181771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.186439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.186521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.186557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.190961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.191031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.191049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.195476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.195566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.195586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.199874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.199944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.199963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.204290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.204365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.204400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.208822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.208913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.208932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.213236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.213306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.213324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.217604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.217694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.709 [2024-12-06 12:24:40.217713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.709 [2024-12-06 12:24:40.222074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.709 [2024-12-06 12:24:40.222199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.222232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.226696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.226768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.226787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.231070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.231141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 
12:24:40.231160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.235722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.235791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.235810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.240141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.240234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.240254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.244648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.244718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.244737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.249155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.249258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.249278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.253609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.253694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.253712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.258015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.258099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.258118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.262634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.262727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.710 [2024-12-06 12:24:40.262746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.267263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.267382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.267404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.271926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.272011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.272030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.276548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.276630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.276649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.280981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.281071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.281090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.285490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.285563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.285582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.289829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.289919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.289938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.294229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.294312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21856 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.294331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.298599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.298671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.298690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.302938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.303007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.303026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.307473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.307547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.307567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.311896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.311966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.311985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.316356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.316438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.316457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.320884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.320984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.321003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.325474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.325565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.325585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.329989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.330081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.330100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.334504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.334605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.334624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.338846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.338916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.710 [2024-12-06 12:24:40.338935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.710 [2024-12-06 12:24:40.343413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.710 [2024-12-06 12:24:40.343477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.711 [2024-12-06 12:24:40.343498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.711 [2024-12-06 12:24:40.347822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.711 [2024-12-06 12:24:40.347909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.711 [2024-12-06 12:24:40.347928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.711 [2024-12-06 12:24:40.352417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.711 [2024-12-06 12:24:40.352488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.711 [2024-12-06 12:24:40.352508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.711 [2024-12-06 12:24:40.356888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.711 [2024-12-06 12:24:40.356958] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.711 [2024-12-06 12:24:40.356978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.711 [2024-12-06 12:24:40.361759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.711 [2024-12-06 12:24:40.361847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.711 [2024-12-06 12:24:40.361881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.971 [2024-12-06 12:24:40.366673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.971 [2024-12-06 12:24:40.366744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.971 [2024-12-06 12:24:40.366763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.971 [2024-12-06 12:24:40.371509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.971 [2024-12-06 12:24:40.371610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.971 [2024-12-06 12:24:40.371645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.971 [2024-12-06 12:24:40.375987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.971 [2024-12-06 12:24:40.376083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.971 [2024-12-06 12:24:40.376102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.971 [2024-12-06 12:24:40.380490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.971 [2024-12-06 12:24:40.380566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.971 [2024-12-06 12:24:40.380585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.384953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.385036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.385055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.389417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.389506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.389524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.393808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.393877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.393896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.398248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.398318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.398337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.402615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.402705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.402724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.406915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.406986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.407005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.411369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.411456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.411477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.415630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.415716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.415735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.420063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 
12:24:40.420153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.420172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.424600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.424684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.424703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.429054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.429138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.429158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.433519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.433610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.433629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.437870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.437961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.437980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.442218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.442289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.442308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.446567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.446656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.446675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.450857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with 
pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.450940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.450959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.455239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.455349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.455368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.459648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.459744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.459762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.464082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.464172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.464191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.468485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.468553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.468572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.472913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.473004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.473022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.477327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.477407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.477426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.481719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.481810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.481829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.486064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.486147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.486166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.490438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.490528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.490547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.494713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.494771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.972 [2024-12-06 12:24:40.494790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.972 [2024-12-06 12:24:40.499116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.972 [2024-12-06 12:24:40.499233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.499253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.503684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.503768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.503787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.508122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.508201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.508221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.512505] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.512584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.512603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.516855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.516935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.516954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.521288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.521357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.521377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.525721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.525790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.525809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.530115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.530184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.530204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.534421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.534511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.534530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.538694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.538786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.538805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.973 
[2024-12-06 12:24:40.543046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.543116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.543135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.547572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.547679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.547699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.552052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.552141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.552160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.556488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.556578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.556597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.560896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.560985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.561004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.565764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.565836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.565856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.570229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.570314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.570333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.574617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.574687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.574706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.578950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.579022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.579041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.583414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.583495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.583516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.587791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.587859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.587878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.592207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.592298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.592317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.596541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.596614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.596632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.600969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.601041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.601060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.605509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.605599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.605618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.609888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.609970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.609989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.614303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.614373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.973 [2024-12-06 12:24:40.614392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:53.973 [2024-12-06 12:24:40.618657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.973 [2024-12-06 12:24:40.618746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.974 [2024-12-06 12:24:40.618766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:53.974 [2024-12-06 12:24:40.623221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:53.974 [2024-12-06 12:24:40.623351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.974 [2024-12-06 12:24:40.623371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.628351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.628441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.628460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.632944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.633035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.633056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.637610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.637694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.637714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.642008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.642091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.642111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.646356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.646444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.646463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.650704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.650786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.650805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.655130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.655212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.655232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.659705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.659774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.659793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.664109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.664178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.664197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.668546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.668637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.668656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.672878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.672950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.672969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.677419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.677500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.677520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.681786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.681885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.681904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.687584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.687720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.687739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.693486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.693580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.693600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.698072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.698153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 
12:24:40.698173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.702531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.702623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.702641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.707031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.707117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.707137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.711703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.711794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.235 [2024-12-06 12:24:40.711813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.235 [2024-12-06 12:24:40.716056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.235 [2024-12-06 12:24:40.716128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.716148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.720562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.720659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.720678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.724897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.724988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.725008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.729471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.729556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:54.236 [2024-12-06 12:24:40.729576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.733809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.733889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.733908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.738345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.738415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.738434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.742708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.742768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.742786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.747036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.747110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.747129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.751667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.751752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.751771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.756122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.756205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.756224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.760557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.760640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.760659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.764926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.764998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.765017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.769435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.769519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.769538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.773727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.773823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.773843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.778147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.778242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.778262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.782481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.782564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.782583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.786707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.786791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.786811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.791120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.791204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.791224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.795593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.795692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.795711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.800018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.800091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.800110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.804572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.804663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.804682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.808961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.809045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.809064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.813391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.813475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.813494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.817714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.817784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.817803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.236 [2024-12-06 12:24:40.822125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.236 [2024-12-06 12:24:40.822206] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.236 [2024-12-06 12:24:40.822226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.826454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.826527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.826546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.830831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.830915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.830933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.835375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.835445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.835465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.839749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.839840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.839859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.844183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.844271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.844290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.848577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.848646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.848665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.852995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.853067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.853086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.857382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.857471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.857490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.861729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.861812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.861831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.866108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.866178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.866208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.870480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.870573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.870593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.874799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.874870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.874889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.879151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.879239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.879258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.883756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 
12:24:40.883842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.883860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.237 [2024-12-06 12:24:40.888760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.237 [2024-12-06 12:24:40.888844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.237 [2024-12-06 12:24:40.888863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.497 [2024-12-06 12:24:40.893508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.497 [2024-12-06 12:24:40.893592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.497 [2024-12-06 12:24:40.893610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.898293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.898362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.898381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.902667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.902751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.902769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.907318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.907400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.907423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.912162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.912316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.912337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.917100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 
00:16:54.498 [2024-12-06 12:24:40.917174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.917228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.922197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.922324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.922344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.927600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.927730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.927765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.932857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.932927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.932946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.937819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.937892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.937910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.942781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.942849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.942868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.947494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.947568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.947621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.952467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.952568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.952588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.957028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.957116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.957136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.961612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.961696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.961714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.966087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.966178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.966198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.970576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.970660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.970679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.974967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.975050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.975068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.979469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.979569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.979620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.983950] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.984020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.984039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.988488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.988578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.988598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.992892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.498 [2024-12-06 12:24:40.992981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.498 [2024-12-06 12:24:40.993000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.498 [2024-12-06 12:24:40.997331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:40.997417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:40.997436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.001733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.001823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.001842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.006135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.006233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.006252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.010596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.010666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.010685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.014862] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.014942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.014960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.019373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.019447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.019468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.023731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.023820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.023839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.028193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.028274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.028294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.032560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.032656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.032675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.036910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.037006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.037027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.041386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.041478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.041498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.499 
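Every entry in the long run above follows the same three-step pattern: tcp.c reports a data digest (CRC-32C) mismatch on the qpair, nvme_qpair.c prints the WRITE command the mismatch belonged to, and the completion is logged as COMMAND TRANSIENT TRANSPORT ERROR (status 00/22) with dnr:0, i.e. a retryable transport-level failure rather than a media error. A minimal way to tally such completions from a saved copy of this output (the log file name is an assumption for illustration, not something the test creates):

    # Count retryable digest-error completions in a captured trace (file name assumed)
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf_trace.log

A count obtained this way should track the command_transient_transport_error counter that the test reads back over RPC a little further down.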
[2024-12-06 12:24:41.045834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.045924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.045944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.050363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.050453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.050472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.054872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.054963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.054982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.059231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.059355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.059376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.063703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.063784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.063803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.068181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.068274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.068293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.072599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.072683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.072701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a 
p:0 m:0 dnr:0 00:16:54.499 [2024-12-06 12:24:41.076928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1559d10) with pdu=0x200016eff3c8 00:16:54.499 [2024-12-06 12:24:41.077011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.499 [2024-12-06 12:24:41.077030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:54.499 6751.50 IOPS, 843.94 MiB/s 00:16:54.499 Latency(us) 00:16:54.499 [2024-12-06T12:24:41.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.499 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:54.499 nvme0n1 : 2.00 6750.32 843.79 0.00 0.00 2365.19 1727.77 13405.09 00:16:54.499 [2024-12-06T12:24:41.157Z] =================================================================================================================== 00:16:54.499 [2024-12-06T12:24:41.158Z] Total : 6750.32 843.79 0.00 0.00 2365.19 1727.77 13405.09 00:16:54.500 { 00:16:54.500 "results": [ 00:16:54.500 { 00:16:54.500 "job": "nvme0n1", 00:16:54.500 "core_mask": "0x2", 00:16:54.500 "workload": "randwrite", 00:16:54.500 "status": "finished", 00:16:54.500 "queue_depth": 16, 00:16:54.500 "io_size": 131072, 00:16:54.500 "runtime": 2.002721, 00:16:54.500 "iops": 6750.316194816952, 00:16:54.500 "mibps": 843.789524352119, 00:16:54.500 "io_failed": 0, 00:16:54.500 "io_timeout": 0, 00:16:54.500 "avg_latency_us": 2365.1885946378497, 00:16:54.500 "min_latency_us": 1727.7672727272727, 00:16:54.500 "max_latency_us": 13405.09090909091 00:16:54.500 } 00:16:54.500 ], 00:16:54.500 "core_count": 1 00:16:54.500 } 00:16:54.500 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:54.500 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:54.500 | .driver_specific 00:16:54.500 | .nvme_error 00:16:54.500 | .status_code 00:16:54.500 | .command_transient_transport_error' 00:16:54.500 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:54.500 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:54.759 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 436 > 0 )) 00:16:54.759 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79759 00:16:54.759 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79759 ']' 00:16:54.759 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79759 00:16:54.759 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:54.759 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.760 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79759 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:55.019 killing process with pid 79759 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79759' 00:16:55.019 Received shutdown signal, test time was about 2.000000 seconds 00:16:55.019 00:16:55.019 Latency(us) 00:16:55.019 [2024-12-06T12:24:41.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.019 [2024-12-06T12:24:41.677Z] =================================================================================================================== 00:16:55.019 [2024-12-06T12:24:41.677Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79759 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79759 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79571 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79571 ']' 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79571 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79571 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.019 killing process with pid 79571 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79571' 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79571 00:16:55.019 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79571 00:16:55.279 00:16:55.279 real 0m15.857s 00:16:55.279 user 0m30.586s 00:16:55.279 sys 0m4.310s 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:55.279 ************************************ 00:16:55.279 END TEST nvmf_digest_error 00:16:55.279 ************************************ 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.279 
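The get_transient_errcount step shown above reads that counter straight from bdevperf over its RPC socket and asserts it is non-zero ((( 436 > 0 ))). The same query can be issued by hand; the rpc.py path, socket, bdev name, and jq filter below are copied from the trace (the filter is written in dotted form, equivalent to the piped form digest.sh uses):

    # Read the transient-transport-error counter for nvme0n1 from the bdevperf RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'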
12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.279 rmmod nvme_tcp 00:16:55.279 rmmod nvme_fabrics 00:16:55.279 rmmod nvme_keyring 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79571 ']' 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79571 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 79571 ']' 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 79571 00:16:55.279 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79571) - No such process 00:16:55.279 Process with pid 79571 is not found 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 79571 is not found' 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.279 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:55.539 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:55.539 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:55.539 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:55.539 12:24:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:16:55.539 00:16:55.539 real 0m32.257s 00:16:55.539 user 1m0.876s 00:16:55.539 sys 0m9.011s 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:55.539 ************************************ 00:16:55.539 END TEST nvmf_digest 00:16:55.539 ************************************ 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.539 ************************************ 00:16:55.539 START TEST nvmf_host_multipath 00:16:55.539 ************************************ 00:16:55.539 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:55.800 * Looking for test storage... 
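Before the multipath test output continues, note the teardown idiom nvmftestfini used above: the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, and firewall state is restored by filtering the saved ruleset rather than flushing it, so only rules tagged for the test (presumably carrying the string SPDK_NVMF in a rule comment) are dropped. A condensed sketch of that step:

    # Keep every iptables rule except those the NVMe-oF test tagged with SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore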
00:16:55.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:55.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.800 --rc genhtml_branch_coverage=1 00:16:55.800 --rc genhtml_function_coverage=1 00:16:55.800 --rc genhtml_legend=1 00:16:55.800 --rc geninfo_all_blocks=1 00:16:55.800 --rc geninfo_unexecuted_blocks=1 00:16:55.800 00:16:55.800 ' 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:55.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.800 --rc genhtml_branch_coverage=1 00:16:55.800 --rc genhtml_function_coverage=1 00:16:55.800 --rc genhtml_legend=1 00:16:55.800 --rc geninfo_all_blocks=1 00:16:55.800 --rc geninfo_unexecuted_blocks=1 00:16:55.800 00:16:55.800 ' 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:55.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.800 --rc genhtml_branch_coverage=1 00:16:55.800 --rc genhtml_function_coverage=1 00:16:55.800 --rc genhtml_legend=1 00:16:55.800 --rc geninfo_all_blocks=1 00:16:55.800 --rc geninfo_unexecuted_blocks=1 00:16:55.800 00:16:55.800 ' 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:55.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.800 --rc genhtml_branch_coverage=1 00:16:55.800 --rc genhtml_function_coverage=1 00:16:55.800 --rc genhtml_legend=1 00:16:55.800 --rc geninfo_all_blocks=1 00:16:55.800 --rc geninfo_unexecuted_blocks=1 00:16:55.800 00:16:55.800 ' 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:55.800 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:55.800 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:55.801 Cannot find device "nvmf_init_br" 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:55.801 Cannot find device "nvmf_init_br2" 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:55.801 Cannot find device "nvmf_tgt_br" 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:16:55.801 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.060 Cannot find device "nvmf_tgt_br2" 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:56.060 Cannot find device "nvmf_init_br" 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:56.060 Cannot find device "nvmf_init_br2" 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:56.060 Cannot find device "nvmf_tgt_br" 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:56.060 Cannot find device "nvmf_tgt_br2" 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:56.060 Cannot find device "nvmf_br" 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:56.060 Cannot find device "nvmf_init_if" 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:56.060 Cannot find device "nvmf_init_if2" 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:16:56.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:16:56.060 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.061 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:56.320 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.320 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:56.320 00:16:56.320 --- 10.0.0.3 ping statistics --- 00:16:56.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.320 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:56.320 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:56.320 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:16:56.320 00:16:56.320 --- 10.0.0.4 ping statistics --- 00:16:56.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.320 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:56.320 00:16:56.320 --- 10.0.0.1 ping statistics --- 00:16:56.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.320 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:56.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:56.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:16:56.320 00:16:56.320 --- 10.0.0.2 ping statistics --- 00:16:56.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.320 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80064 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80064 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80064 ']' 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.320 12:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:56.320 [2024-12-06 12:24:42.910164] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
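For reference, the topology that nvmf_veth_init assembled above condenses to the standalone sketch below. Interface names and addresses are taken verbatim from the log; this summarizes the commands shown rather than reproducing the SPDK helper itself. The two initiator veth ends keep 10.0.0.1/10.0.0.2 in the default namespace, the two target ends move into nvmf_tgt_ns_spdk with 10.0.0.3/10.0.0.4, all four peer ends are enslaved to the nvmf_br bridge, and iptables admits NVMe/TCP on 4420 plus bridge-local forwarding.

#!/usr/bin/env bash
# Condensed sketch of the nvmf_veth_init topology shown in the log above.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: *_if ends carry addresses, *_br ends get enslaved to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# target-side interfaces live inside the namespace
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# initiators 10.0.0.1/.2, targets 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# one bridge joins the four host-side veth ends
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# admit NVMe/TCP (4420) and bridge-local forwarding, as in the log
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity check, same directions as the pings in the log
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1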
00:16:56.320 [2024-12-06 12:24:42.910271] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.580 [2024-12-06 12:24:43.065712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:56.580 [2024-12-06 12:24:43.104608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.580 [2024-12-06 12:24:43.104670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.580 [2024-12-06 12:24:43.104684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.580 [2024-12-06 12:24:43.104694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.580 [2024-12-06 12:24:43.104703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.580 [2024-12-06 12:24:43.105623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.580 [2024-12-06 12:24:43.105638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.580 [2024-12-06 12:24:43.141382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:56.580 12:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.580 12:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:16:56.580 12:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:56.580 12:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:56.580 12:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:56.839 12:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.839 12:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80064 00:16:56.839 12:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:57.097 [2024-12-06 12:24:43.525960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.097 12:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:57.356 Malloc0 00:16:57.356 12:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:57.613 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:57.871 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:58.129 [2024-12-06 12:24:44.652355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:58.129 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:58.388 [2024-12-06 12:24:44.884444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:58.388 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80111 00:16:58.388 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:58.388 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:58.388 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80111 /var/tmp/bdevperf.sock 00:16:58.388 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80111 ']' 00:16:58.388 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.388 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.388 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.388 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.388 12:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:59.327 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.327 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:16:59.327 12:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:59.585 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:59.843 Nvme0n1 00:16:59.843 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:00.410 Nvme0n1 00:17:00.410 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:00.410 12:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:01.346 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:01.346 12:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:01.604 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
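The provisioning steps above are a plain RPC sequence, restated here in condensed form (all commands and arguments are exactly as shown in the log): the target gets a TCP transport, a 64 MB malloc bdev with 512-byte blocks exported as a namespace of cnode1 with ANA reporting enabled (-r), and listeners on 4420 and 4421; bdevperf then attaches both listeners under the same controller name with -x multipath, so they become two paths of a single Nvme0n1 bdev.

# Condensed from the RPCs in the log: target-side provisioning, then the
# initiator-side multipath attach against /var/tmp/bdevperf.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# target side (the test runs these against the nvmf_tgt started above)
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -r -m 2
"$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421

# initiator side: bdevperf was started earlier as
#   build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
# attaching both listeners under one name gives a single Nvme0n1 with two paths
brpc() { "$rpc" -s /var/tmp/bdevperf.sock "$@"; }
brpc bdev_nvme_set_options -r -1
brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n "$nqn" -x multipath -l -1 -o 10
brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
    -n "$nqn" -x multipath -l -1 -o 10

# I/O is then driven with:
#   examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests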
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:01.862 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:01.862 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80162 00:17:01.862 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80064 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:01.862 12:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:08.427 Attaching 4 probes... 00:17:08.427 @path[10.0.0.3, 4421]: 20441 00:17:08.427 @path[10.0.0.3, 4421]: 20808 00:17:08.427 @path[10.0.0.3, 4421]: 20952 00:17:08.427 @path[10.0.0.3, 4421]: 21140 00:17:08.427 @path[10.0.0.3, 4421]: 20964 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80162 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:08.427 12:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:08.685 12:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:08.685 12:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80271 00:17:08.685 12:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:08.685 12:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80064 
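Each confirm_io_on_port block in this log repeats the same check. The following is a rough reconstruction pieced together from the commands visible above (a sketch of the observed behaviour, not the test helper's actual source; $nvmfapp_pid is the target pid, 80064 here, and the trace handling is simplified):

confirm_io_on_port() {   # e.g. confirm_io_on_port optimized 4421
    local expected_state=$1 expected_port=$2
    local spdk=/home/vagrant/spdk_repo/spdk
    local trace=$spdk/test/nvmf/host/trace.txt

    # sample per-path I/O counts inside the running target with bpftrace
    "$spdk/scripts/bpftrace.sh" "$nvmfapp_pid" "$spdk/scripts/bpf/nvmf_path.bt" &> "$trace" &
    local dtrace_pid=$!
    sleep 6

    # which listener currently advertises the expected ANA state?
    local active_port
    active_port=$("$spdk/scripts/rpc.py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")

    # which port did the traced I/O actually use?  trace lines look like
    #   @path[10.0.0.3, 4421]: 20441
    local port
    port=$(awk '$1=="@path[10.0.0.3," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

    kill "$dtrace_pid"
    rm -f "$trace"
    [[ $port == "$expected_port" && $port == "$active_port" ]]
}

When both listeners are later set inaccessible, no @path samples are recorded and both strings come back empty, which is why the confirm_io_on_port '' '' run further down also passes.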
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.305 Attaching 4 probes... 00:17:15.305 @path[10.0.0.3, 4420]: 20793 00:17:15.305 @path[10.0.0.3, 4420]: 21107 00:17:15.305 @path[10.0.0.3, 4420]: 21198 00:17:15.305 @path[10.0.0.3, 4420]: 21119 00:17:15.305 @path[10.0.0.3, 4420]: 21030 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80271 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:15.305 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:15.563 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:15.563 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80389 00:17:15.563 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80064 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:15.563 12:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:22.164 12:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:22.164 12:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:22.164 Attaching 4 probes... 00:17:22.164 @path[10.0.0.3, 4421]: 15328 00:17:22.164 @path[10.0.0.3, 4421]: 20827 00:17:22.164 @path[10.0.0.3, 4421]: 20784 00:17:22.164 @path[10.0.0.3, 4421]: 20605 00:17:22.164 @path[10.0.0.3, 4421]: 20671 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80389 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80507 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80064 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:22.164 12:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:28.727 12:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:28.727 12:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.727 Attaching 4 probes... 
00:17:28.727 00:17:28.727 00:17:28.727 00:17:28.727 00:17:28.727 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80507 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:28.727 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:28.985 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:28.985 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80064 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:28.985 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80619 00:17:28.985 12:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.542 Attaching 4 probes... 
00:17:35.542 @path[10.0.0.3, 4421]: 20399 00:17:35.542 @path[10.0.0.3, 4421]: 20559 00:17:35.542 @path[10.0.0.3, 4421]: 20756 00:17:35.542 @path[10.0.0.3, 4421]: 20699 00:17:35.542 @path[10.0.0.3, 4421]: 20447 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80619 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.542 12:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:35.801 12:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:17:36.737 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:36.737 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80743 00:17:36.737 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80064 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:36.737 12:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:43.302 Attaching 4 probes... 
00:17:43.302 @path[10.0.0.3, 4420]: 20272 00:17:43.302 @path[10.0.0.3, 4420]: 20457 00:17:43.302 @path[10.0.0.3, 4420]: 20389 00:17:43.302 @path[10.0.0.3, 4420]: 20354 00:17:43.302 @path[10.0.0.3, 4420]: 20524 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80743 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:43.302 [2024-12-06 12:25:29.762395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:43.302 12:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:43.560 12:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:17:50.124 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:50.124 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80917 00:17:50.124 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80064 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:50.124 12:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:55.395 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:55.395 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:55.654 Attaching 4 probes... 
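The last two checks exercise path loss and recovery rather than just ANA re-labelling: the 4421 listener is removed outright, I/O is confirmed to fail over to 4420, then 4421 is re-added and promoted to optimized and I/O is confirmed to move back. Condensed from the RPCs above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
sleep 1
# confirm_io_on_port non_optimized 4420   (the @path[10.0.0.3, 4420] counts above)

"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
"$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n optimized
# confirm_io_on_port optimized 4421       (the trace that follows)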
00:17:55.654 @path[10.0.0.3, 4421]: 19968 00:17:55.654 @path[10.0.0.3, 4421]: 20384 00:17:55.654 @path[10.0.0.3, 4421]: 20450 00:17:55.654 @path[10.0.0.3, 4421]: 20161 00:17:55.654 @path[10.0.0.3, 4421]: 20403 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80917 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80111 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80111 ']' 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80111 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.654 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80111 00:17:55.927 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:55.927 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:55.927 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80111' 00:17:55.927 killing process with pid 80111 00:17:55.927 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80111 00:17:55.927 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80111 00:17:55.927 { 00:17:55.927 "results": [ 00:17:55.927 { 00:17:55.927 "job": "Nvme0n1", 00:17:55.927 "core_mask": "0x4", 00:17:55.927 "workload": "verify", 00:17:55.927 "status": "terminated", 00:17:55.927 "verify_range": { 00:17:55.927 "start": 0, 00:17:55.927 "length": 16384 00:17:55.927 }, 00:17:55.927 "queue_depth": 128, 00:17:55.927 "io_size": 4096, 00:17:55.927 "runtime": 55.395763, 00:17:55.927 "iops": 8732.617330318206, 00:17:55.927 "mibps": 34.11178644655549, 00:17:55.927 "io_failed": 0, 00:17:55.927 "io_timeout": 0, 00:17:55.927 "avg_latency_us": 14627.439923657035, 00:17:55.927 "min_latency_us": 983.04, 00:17:55.927 "max_latency_us": 7015926.69090909 00:17:55.927 } 00:17:55.927 ], 00:17:55.927 "core_count": 1 00:17:55.927 } 00:17:55.927 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80111 00:17:55.927 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:55.927 [2024-12-06 12:24:44.958419] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 
24.03.0 initialization... 00:17:55.927 [2024-12-06 12:24:44.958522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80111 ] 00:17:55.927 [2024-12-06 12:24:45.105103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.927 [2024-12-06 12:24:45.134477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.927 [2024-12-06 12:24:45.163833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:55.927 Running I/O for 90 seconds... 00:17:55.927 8084.00 IOPS, 31.58 MiB/s [2024-12-06T12:25:42.585Z] 8630.50 IOPS, 33.71 MiB/s [2024-12-06T12:25:42.585Z] 9231.00 IOPS, 36.06 MiB/s [2024-12-06T12:25:42.585Z] 9531.25 IOPS, 37.23 MiB/s [2024-12-06T12:25:42.585Z] 9718.60 IOPS, 37.96 MiB/s [2024-12-06T12:25:42.585Z] 9859.50 IOPS, 38.51 MiB/s [2024-12-06T12:25:42.585Z] 9949.29 IOPS, 38.86 MiB/s [2024-12-06T12:25:42.585Z] 9981.62 IOPS, 38.99 MiB/s [2024-12-06T12:25:42.585Z] [2024-12-06 12:24:55.128928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.927 [2024-12-06 12:24:55.128980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:55.927 [2024-12-06 12:24:55.129047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.927 [2024-12-06 12:24:55.129066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:55.927 [2024-12-06 12:24:55.129086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.129101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.129134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.129166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.129230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.129264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.129296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.928 [2024-12-06 12:24:55.129329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.928 [2024-12-06 12:24:55.129385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.928 [2024-12-06 12:24:55.129422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.928 [2024-12-06 12:24:55.129455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.928 [2024-12-06 12:24:55.129487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.928 [2024-12-06 12:24:55.129519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.928 [2024-12-06 12:24:55.129552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.928 [2024-12-06 12:24:55.129598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:55.928 [2024-12-06 12:24:55.129857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.129893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.129925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.129957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.129975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.129989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.130008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.130030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.130068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.130082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.130102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:130272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.130116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.130140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.130155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.130174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.130188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.130236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.130254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.130274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.130288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.130308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.130322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.130341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.130355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.130375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.130389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.928 [2024-12-06 12:24:55.130409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.928 [2024-12-06 12:24:55.130423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.130458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.130493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.130537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.130570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.130618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.130651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.130683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.130716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.929 [2024-12-06 12:24:55.130748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.929 [2024-12-06 12:24:55.130781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.929 [2024-12-06 12:24:55.130813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.929 [2024-12-06 12:24:55.130846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.929 [2024-12-06 12:24:55.130880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.929 [2024-12-06 12:24:55.130912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 
sqhd:0031 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.929 [2024-12-06 12:24:55.130952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.130972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.929 [2024-12-06 12:24:55.130985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:130416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:130424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:130456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:130464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:55.929 [2024-12-06 12:24:55.131507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.929 [2024-12-06 12:24:55.131521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:130520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.131556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.131590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.930 [2024-12-06 12:24:55.131639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.930 [2024-12-06 
12:24:55.131686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.930 [2024-12-06 12:24:55.131719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.930 [2024-12-06 12:24:55.131751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.930 [2024-12-06 12:24:55.131783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.930 [2024-12-06 12:24:55.131816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.930 [2024-12-06 12:24:55.131854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.930 [2024-12-06 12:24:55.131889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.131953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.131973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.131987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130560 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:130568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:55.930 [2024-12-06 12:24:55.132591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.930 [2024-12-06 12:24:55.132621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.132640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.132654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.132674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.132688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.132708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.132722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:17:55.931 [2024-12-06 12:24:55.132748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.132762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.132786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.132802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.132822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.132836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.132856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.132869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.132889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.132903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.132922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.132936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.132956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.132970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.132989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.133004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.133037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.133051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.133070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.133083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.133102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.133116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.133135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.133149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.931 [2024-12-06 12:24:55.134551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.134609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.134644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.134678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.134711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.134748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.134781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.134815] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.134867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.134901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.134934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.134967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.134986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.931 [2024-12-06 12:24:55.135027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:55.931 [2024-12-06 12:24:55.135049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:24:55.135064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:24:55.135083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:24:55.135097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:55.932 9985.56 IOPS, 39.01 MiB/s [2024-12-06T12:25:42.590Z] 10043.90 IOPS, 39.23 MiB/s [2024-12-06T12:25:42.590Z] 10083.27 IOPS, 39.39 MiB/s [2024-12-06T12:25:42.590Z] 10132.50 IOPS, 39.58 MiB/s [2024-12-06T12:25:42.590Z] 10170.31 IOPS, 39.73 MiB/s [2024-12-06T12:25:42.590Z] 10195.29 IOPS, 39.83 MiB/s [2024-12-06T12:25:42.590Z] [2024-12-06 12:25:01.727443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:25:01.727497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 
[2024-12-06 12:25:01.727588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:25:01.727638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:25:01.727683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:25:01.727715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:27416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:25:01.727746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:25:01.727777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.727808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.727839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.727893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.727924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26768 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.727955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.727973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.727986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.728017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.728048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:27432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:25:01.728079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.728109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.728143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.728174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.728204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.728251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.728294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.728327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.932 [2024-12-06 12:25:01.728359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:25:01.728396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:27448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:25:01.728428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:25:01.728459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:55.932 [2024-12-06 12:25:01.728477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:27464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.932 [2024-12-06 12:25:01.728490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.933 [2024-12-06 12:25:01.728521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.933 [2024-12-06 12:25:01.728552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.933 [2024-12-06 12:25:01.728583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:17:55.933 [2024-12-06 12:25:01.728601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.933 [2024-12-06 12:25:01.728613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:26896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.728983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.728996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.729014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.729027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.729045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.729063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.729083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.729096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.729115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.729127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.729145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.729158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.729205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.729221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.729240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.729253] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.729272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.933 [2024-12-06 12:25:01.729285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:55.933 [2024-12-06 12:25:01.729304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.934 [2024-12-06 12:25:01.729452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.934 [2024-12-06 12:25:01.729484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.934 [2024-12-06 12:25:01.729525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.934 [2024-12-06 12:25:01.729557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:55.934 [2024-12-06 12:25:01.729602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:27544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.934 [2024-12-06 12:25:01.729633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.934 [2024-12-06 12:25:01.729664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:27560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.934 [2024-12-06 12:25:01.729695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:124 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.729970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.729983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.730001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.730014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.730032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.730045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.730063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.730076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.730094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.730107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.730125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.730138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.730156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.730170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.730188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.934 [2024-12-06 12:25:01.730214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.730234] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:27568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.934 [2024-12-06 12:25:01.730249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.730267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.934 [2024-12-06 12:25:01.730281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:55.934 [2024-12-06 12:25:01.730299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.935 [2024-12-06 12:25:01.730318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.935 [2024-12-06 12:25:01.730351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.935 [2024-12-06 12:25:01.730382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.935 [2024-12-06 12:25:01.730413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.935 [2024-12-06 12:25:01.730444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.935 [2024-12-06 12:25:01.730475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 
m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.730968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.730981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.731811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.731837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.731866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.731881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.731906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.731930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.731956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.731970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.731994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.732007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.732031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.732048] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.732073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.935 [2024-12-06 12:25:01.732086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:55.935 [2024-12-06 12:25:01.732126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.936 [2024-12-06 12:25:01.732140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:27632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:27640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:27688 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:27720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:01.732791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:27752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:01.732805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:55.936 10053.20 IOPS, 39.27 MiB/s [2024-12-06T12:25:42.594Z] 9565.56 IOPS, 37.37 MiB/s [2024-12-06T12:25:42.594Z] 9614.18 IOPS, 37.56 MiB/s [2024-12-06T12:25:42.594Z] 9656.06 IOPS, 37.72 MiB/s [2024-12-06T12:25:42.594Z] 9698.05 IOPS, 37.88 MiB/s [2024-12-06T12:25:42.594Z] 9729.65 IOPS, 38.01 MiB/s [2024-12-06T12:25:42.594Z] 9757.33 IOPS, 38.11 MiB/s [2024-12-06T12:25:42.594Z] [2024-12-06 12:25:08.732439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:08.732494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:08.732563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:08.732583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:08.732603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:08.732638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:08.732659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:08.732673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:08.732691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:08.732704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:08.732723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:08.732736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:08.732755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:08.732768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:08.732787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:08.732800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:08.732823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:08.732838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:08.732856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:08.732870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:08.732888] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.936 [2024-12-06 12:25:08.732900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:55.936 [2024-12-06 12:25:08.732919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.732932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.732950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.732963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.732982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.732995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.733033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.733068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 
sqhd:006f p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733568] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.937 [2024-12-06 12:25:08.733648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.733700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.733734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.733767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.733800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.733831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.733863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.733895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 
12:25:08.733936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:55.937 [2024-12-06 12:25:08.733955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.937 [2024-12-06 12:25:08.733968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.734016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.734048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.734081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.734114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.734146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.734179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129176 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.938 [2024-12-06 12:25:08.734838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.734878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.734917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.734951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.734970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.734984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.735003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.735017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.735036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.735049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 
dnr:0 00:17:55.938 [2024-12-06 12:25:08.735069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.735082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.938 [2024-12-06 12:25:08.735102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.938 [2024-12-06 12:25:08.735115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.939 [2024-12-06 12:25:08.735147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 
12:25:08.735838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.735968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.735987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.736001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:55.939 [2024-12-06 12:25:08.736020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.939 [2024-12-06 12:25:08.736033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.736104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.736138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.736171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.736204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.736258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.736293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.736326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.736358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.736783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.736797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.737414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.737439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 
m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.737470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.737486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.737513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.940 [2024-12-06 12:25:08.737527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.737553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.737567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.737592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.737607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.737632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.737646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.737672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.737686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.737712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.940 [2024-12-06 12:25:08.737736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:55.940 [2024-12-06 12:25:08.737764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:08.737778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:08.737804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:08.737821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:08.737862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:08.737881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:55.941 9669.86 IOPS, 37.77 MiB/s [2024-12-06T12:25:42.599Z] 9249.43 IOPS, 36.13 MiB/s [2024-12-06T12:25:42.599Z] 8864.04 IOPS, 34.63 MiB/s [2024-12-06T12:25:42.599Z] 8509.48 IOPS, 33.24 MiB/s [2024-12-06T12:25:42.599Z] 8182.19 IOPS, 31.96 MiB/s [2024-12-06T12:25:42.599Z] 7879.15 IOPS, 30.78 MiB/s [2024-12-06T12:25:42.599Z] 7597.75 IOPS, 29.68 MiB/s [2024-12-06T12:25:42.599Z] 7407.17 IOPS, 28.93 MiB/s [2024-12-06T12:25:42.599Z] 7501.60 IOPS, 29.30 MiB/s [2024-12-06T12:25:42.599Z] 7591.48 IOPS, 29.65 MiB/s [2024-12-06T12:25:42.599Z] 7680.00 IOPS, 30.00 MiB/s [2024-12-06T12:25:42.599Z] 7760.73 IOPS, 30.32 MiB/s [2024-12-06T12:25:42.599Z] 7831.76 IOPS, 30.59 MiB/s [2024-12-06T12:25:42.599Z] 7898.74 IOPS, 30.85 MiB/s [2024-12-06T12:25:42.599Z] [2024-12-06 12:25:22.186673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.186727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.186769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.186784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.186797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.186809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.186822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.186834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.186847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.186858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.186872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.186883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.186896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.186907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.186920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.186953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.186968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.186980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.186993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.187005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.187029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.941 [2024-12-06 12:25:22.187054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:55.941 [2024-12-06 12:25:22.187250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.941 [2024-12-06 12:25:22.187453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.941 [2024-12-06 12:25:22.187466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.187479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.187504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.187531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.942 [2024-12-06 12:25:22.187970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.187984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.187996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.188009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.188026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.188040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.188052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.188064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.188076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.188089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3424 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.188101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.188114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.188126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.188140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.188151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.188164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.188176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.188189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.188200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.188238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.188252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.188266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.188278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.942 [2024-12-06 12:25:22.188292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.942 [2024-12-06 12:25:22.188303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.943 [2024-12-06 12:25:22.188455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.943 [2024-12-06 12:25:22.188482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.943 [2024-12-06 12:25:22.188508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.943 [2024-12-06 12:25:22.188534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.943 [2024-12-06 12:25:22.188561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.943 [2024-12-06 12:25:22.188602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.943 [2024-12-06 12:25:22.188628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.943 [2024-12-06 12:25:22.188654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.188976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.188988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.189002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.189014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.189027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.189039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.189052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.943 [2024-12-06 12:25:22.189070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.189084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.943 [2024-12-06 12:25:22.189096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.189110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.943 [2024-12-06 12:25:22.189122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.943 [2024-12-06 12:25:22.189135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.944 [2024-12-06 12:25:22.189147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.944 [2024-12-06 12:25:22.189172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.944 [2024-12-06 12:25:22.189207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189224] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.944 [2024-12-06 12:25:22.189237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.944 [2024-12-06 12:25:22.189262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.944 [2024-12-06 12:25:22.189288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:123 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.944 [2024-12-06 12:25:22.189825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.944 [2024-12-06 12:25:22.189851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.944 [2024-12-06 12:25:22.189877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.944 [2024-12-06 12:25:22.189903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.944 [2024-12-06 12:25:22.189928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.944 [2024-12-06 12:25:22.189941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.944 [2024-12-06 12:25:22.189953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.189967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.945 [2024-12-06 12:25:22.189979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.189992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.945 [2024-12-06 12:25:22.190004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.190017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.945 [2024-12-06 12:25:22.190030] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.190043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.945 [2024-12-06 12:25:22.190055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.190069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.945 [2024-12-06 12:25:22.190086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.190100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.945 [2024-12-06 12:25:22.190112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.190126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.945 [2024-12-06 12:25:22.190138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.190151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.945 [2024-12-06 12:25:22.190163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.190189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.945 [2024-12-06 12:25:22.190205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.190219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.945 [2024-12-06 12:25:22.190231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.190284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.945 [2024-12-06 12:25:22.190302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.945 [2024-12-06 12:25:22.190312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3256 len:8 PRP1 0x0 PRP2 0x0 00:17:55.945 [2024-12-06 12:25:22.190324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.945 [2024-12-06 12:25:22.191378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:55.945 [2024-12-06 12:25:22.191462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeab1e0 (9): Bad file descriptor 00:17:55.945 [2024-12-06 12:25:22.191807] uring.c: 664:uring_sock_create: *ERROR*: 
connect() failed, errno = 111 00:17:55.945 [2024-12-06 12:25:22.191837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeab1e0 with addr=10.0.0.3, port=4421 00:17:55.945 [2024-12-06 12:25:22.191853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeab1e0 is same with the state(6) to be set 00:17:55.945 [2024-12-06 12:25:22.191900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeab1e0 (9): Bad file descriptor 00:17:55.945 [2024-12-06 12:25:22.191933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:17:55.945 [2024-12-06 12:25:22.191948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:17:55.945 [2024-12-06 12:25:22.191962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:17:55.945 [2024-12-06 12:25:22.191974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:17:55.945 [2024-12-06 12:25:22.191988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:55.945 7960.97 IOPS, 31.10 MiB/s [2024-12-06T12:25:42.603Z] 8012.19 IOPS, 31.30 MiB/s [2024-12-06T12:25:42.603Z] 8073.24 IOPS, 31.54 MiB/s [2024-12-06T12:25:42.603Z] 8129.51 IOPS, 31.76 MiB/s [2024-12-06T12:25:42.603Z] 8182.27 IOPS, 31.96 MiB/s [2024-12-06T12:25:42.603Z] 8231.78 IOPS, 32.16 MiB/s [2024-12-06T12:25:42.603Z] 8280.45 IOPS, 32.35 MiB/s [2024-12-06T12:25:42.603Z] 8319.51 IOPS, 32.50 MiB/s [2024-12-06T12:25:42.603Z] 8360.98 IOPS, 32.66 MiB/s [2024-12-06T12:25:42.603Z] 8403.80 IOPS, 32.83 MiB/s [2024-12-06T12:25:42.603Z] [2024-12-06 12:25:32.251857] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:17:55.945 8442.91 IOPS, 32.98 MiB/s [2024-12-06T12:25:42.603Z] 8481.15 IOPS, 33.13 MiB/s [2024-12-06T12:25:42.603Z] 8517.79 IOPS, 33.27 MiB/s [2024-12-06T12:25:42.603Z] 8553.59 IOPS, 33.41 MiB/s [2024-12-06T12:25:42.603Z] 8580.28 IOPS, 33.52 MiB/s [2024-12-06T12:25:42.603Z] 8611.41 IOPS, 33.64 MiB/s [2024-12-06T12:25:42.603Z] 8643.50 IOPS, 33.76 MiB/s [2024-12-06T12:25:42.603Z] 8673.02 IOPS, 33.88 MiB/s [2024-12-06T12:25:42.603Z] 8699.52 IOPS, 33.98 MiB/s [2024-12-06T12:25:42.603Z] 8726.95 IOPS, 34.09 MiB/s [2024-12-06T12:25:42.603Z] Received shutdown signal, test time was about 55.396559 seconds 00:17:55.945 00:17:55.945 Latency(us) 00:17:55.945 [2024-12-06T12:25:42.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.945 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:55.945 Verification LBA range: start 0x0 length 0x4000 00:17:55.945 Nvme0n1 : 55.40 8732.62 34.11 0.00 0.00 14627.44 983.04 7015926.69 00:17:55.945 [2024-12-06T12:25:42.603Z] =================================================================================================================== 00:17:55.945 [2024-12-06T12:25:42.603Z] Total : 8732.62 34.11 0.00 0.00 14627.44 983.04 7015926.69 00:17:55.945 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:56.205 rmmod nvme_tcp 00:17:56.205 rmmod nvme_fabrics 00:17:56.205 rmmod nvme_keyring 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80064 ']' 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80064 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80064 ']' 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80064 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80064 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.205 killing process with pid 80064 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80064' 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80064 00:17:56.205 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80064 00:17:56.464 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:56.464 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:56.464 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:56.464 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:17:56.464 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:17:56.464 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:56.464 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:17:56.464 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:56.464 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:56.464 12:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:56.464 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:56.464 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:56.464 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:56.464 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:56.464 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:56.464 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:56.464 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:56.464 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.724 12:25:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:17:56.724 00:17:56.724 real 1m1.055s 00:17:56.724 user 2m49.541s 00:17:56.724 sys 0m17.901s 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:56.724 ************************************ 00:17:56.724 END TEST nvmf_host_multipath 00:17:56.724 ************************************ 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.724 ************************************ 00:17:56.724 START TEST nvmf_timeout 00:17:56.724 ************************************ 00:17:56.724 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:56.724 * Looking for test storage... 00:17:56.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:17:56.984 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:56.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.985 --rc genhtml_branch_coverage=1 00:17:56.985 --rc genhtml_function_coverage=1 00:17:56.985 --rc genhtml_legend=1 00:17:56.985 --rc geninfo_all_blocks=1 00:17:56.985 --rc geninfo_unexecuted_blocks=1 00:17:56.985 00:17:56.985 ' 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:56.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.985 --rc genhtml_branch_coverage=1 00:17:56.985 --rc genhtml_function_coverage=1 00:17:56.985 --rc genhtml_legend=1 00:17:56.985 --rc geninfo_all_blocks=1 00:17:56.985 --rc geninfo_unexecuted_blocks=1 00:17:56.985 00:17:56.985 ' 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:56.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.985 --rc genhtml_branch_coverage=1 00:17:56.985 --rc genhtml_function_coverage=1 00:17:56.985 --rc genhtml_legend=1 00:17:56.985 --rc geninfo_all_blocks=1 00:17:56.985 --rc geninfo_unexecuted_blocks=1 00:17:56.985 00:17:56.985 ' 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:56.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.985 --rc genhtml_branch_coverage=1 00:17:56.985 --rc genhtml_function_coverage=1 00:17:56.985 --rc genhtml_legend=1 00:17:56.985 --rc geninfo_all_blocks=1 00:17:56.985 --rc geninfo_unexecuted_blocks=1 00:17:56.985 00:17:56.985 ' 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.985 
12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:56.985 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:56.986 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:56.986 12:25:43 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:56.986 Cannot find device "nvmf_init_br" 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:56.986 Cannot find device "nvmf_init_br2" 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:17:56.986 Cannot find device "nvmf_tgt_br" 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:56.986 Cannot find device "nvmf_tgt_br2" 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:56.986 Cannot find device "nvmf_init_br" 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:56.986 Cannot find device "nvmf_init_br2" 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:56.986 Cannot find device "nvmf_tgt_br" 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:56.986 Cannot find device "nvmf_tgt_br2" 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:56.986 Cannot find device "nvmf_br" 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:17:56.986 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:57.244 Cannot find device "nvmf_init_if" 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:57.244 Cannot find device "nvmf_init_if2" 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:57.244 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.245 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.503 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.503 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:57.503 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:57.503 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:57.503 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.503 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
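(Editor's note: the nvmf_veth_init block above builds the virtual test topology: a network namespace nvmf_tgt_ns_spdk holding the target-side interfaces, veth pairs for initiator and target, a bridge nvmf_br joining the host-side ends, and iptables ACCEPT rules tagged SPDK_NVMF so they can be stripped again at teardown. A minimal standalone sketch of the same idea, with hypothetical "demo_" names and only one pair per side rather than the harness's exact script, would be:

    # namespace for the target side
    ip netns add demo_tgt_ns
    # one veth pair per side; the *_br ends get enslaved to the bridge
    ip link add demo_init_if type veth peer name demo_init_br
    ip link add demo_tgt_if type veth peer name demo_tgt_br
    ip link set demo_tgt_if netns demo_tgt_ns
    # address the endpoints (initiator 10.0.0.1, target 10.0.0.3, as in the log)
    ip addr add 10.0.0.1/24 dev demo_init_if
    ip netns exec demo_tgt_ns ip addr add 10.0.0.3/24 dev demo_tgt_if
    ip link set demo_init_if up
    ip link set demo_init_br up
    ip link set demo_tgt_br up
    ip netns exec demo_tgt_ns ip link set demo_tgt_if up
    # bridge the host-side ends together
    ip link add demo_br type bridge
    ip link set demo_br up
    ip link set demo_init_br master demo_br
    ip link set demo_tgt_br master demo_br
    # let the NVMe/TCP port and bridged traffic through, tagged for later cleanup
    iptables -I INPUT 1 -i demo_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment DEMO_NVMF
    iptables -A FORWARD -i demo_br -o demo_br -j ACCEPT
    ping -c 1 10.0.0.3   # same reachability check the harness performs next

End of note.)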
00:17:57.503 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:57.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:57.503 00:17:57.503 --- 10.0.0.3 ping statistics --- 00:17:57.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.503 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:57.503 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:57.503 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:57.503 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:17:57.503 00:17:57.503 --- 10.0.0.4 ping statistics --- 00:17:57.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.503 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:57.503 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:57.503 00:17:57.503 --- 10.0.0.1 ping statistics --- 00:17:57.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.503 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:57.503 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:57.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:57.504 00:17:57.504 --- 10.0.0.2 ping statistics --- 00:17:57.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.504 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81280 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81280 00:17:57.504 12:25:43 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81280 ']' 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.504 12:25:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:57.504 [2024-12-06 12:25:44.022214] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:17:57.504 [2024-12-06 12:25:44.022323] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.763 [2024-12-06 12:25:44.161893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:57.763 [2024-12-06 12:25:44.187934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.763 [2024-12-06 12:25:44.188002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.763 [2024-12-06 12:25:44.188027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.763 [2024-12-06 12:25:44.188034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.763 [2024-12-06 12:25:44.188040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
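(Editor's note: waitforlisten above blocks until the freshly launched nvmf_tgt (pid 81280) answers on /var/tmp/spdk.sock before the test starts configuring it. An illustrative polling helper in the same spirit, not the harness's actual implementation, would retry a trivial RPC until it succeeds:

    # hypothetical helper: poll a UNIX-domain RPC socket until the SPDK app answers
    wait_for_rpc() {
        local sock=$1 retries=${2:-100}
        for ((i = 0; i < retries; i++)); do
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc /var/tmp/spdk.sock

End of note.)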
00:17:57.763 [2024-12-06 12:25:44.188872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.763 [2024-12-06 12:25:44.188881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.763 [2024-12-06 12:25:44.215917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:58.329 12:25:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.329 12:25:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:58.329 12:25:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:58.329 12:25:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.329 12:25:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:58.593 12:25:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.593 12:25:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:58.593 12:25:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:58.850 [2024-12-06 12:25:45.274039] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.850 12:25:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:59.108 Malloc0 00:17:59.108 12:25:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:59.365 12:25:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:59.624 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:59.624 [2024-12-06 12:25:46.226274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:59.624 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81329 00:17:59.624 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:59.624 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81329 /var/tmp/bdevperf.sock 00:17:59.624 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81329 ']' 00:17:59.624 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.624 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.624 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
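(Editor's note: the host/timeout.sh steps traced above configure the target entirely over rpc.py: a TCP transport, a 64 MiB / 512-byte Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.3:4420. Condensed into one runnable sequence, using the same paths and arguments that appear in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

End of note.)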
00:17:59.624 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.624 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:59.882 [2024-12-06 12:25:46.285990] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:17:59.882 [2024-12-06 12:25:46.286073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81329 ] 00:17:59.882 [2024-12-06 12:25:46.424210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.882 [2024-12-06 12:25:46.453752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.882 [2024-12-06 12:25:46.482238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:00.140 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.140 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:00.140 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:00.140 12:25:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:00.705 NVMe0n1 00:18:00.705 12:25:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81341 00:18:00.705 12:25:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:00.705 12:25:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:00.705 Running I/O for 10 seconds... 
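(Editor's note: on the initiator side the test drives the already-running bdevperf process through its own RPC socket: reconnect retries are made unlimited (-r -1), NVMe0 is attached to the 10.0.0.3:4420 subsystem with a 5 s controller-loss timeout and a 2 s reconnect delay, and perform_tests kicks off the 10-second verify workload. The abort records that follow are produced when the listener is removed mid-run. Condensed from the same commands shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # start the timed workload in the background; the harness sleeps 1 s, then
    # removes the 10.0.0.3:4420 listener to force the I/O aborts logged below
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &

End of note.)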
00:18:01.641 12:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:01.903 7957.00 IOPS, 31.08 MiB/s [2024-12-06T12:25:48.561Z] [2024-12-06 12:25:48.399934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.903 [2024-12-06 12:25:48.399990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.903 [2024-12-06 12:25:48.400011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.903 [2024-12-06 12:25:48.400021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.903 [2024-12-06 12:25:48.400030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.903 [2024-12-06 12:25:48.400039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.903 [2024-12-06 12:25:48.400049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.903 [2024-12-06 12:25:48.400057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.903 [2024-12-06 12:25:48.400066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.903 [2024-12-06 12:25:48.400075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.903 [2024-12-06 12:25:48.400084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.903 [2024-12-06 12:25:48.400092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.903 [2024-12-06 12:25:48.400101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.903 [2024-12-06 12:25:48.400109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.903 [2024-12-06 12:25:48.400119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.903 [2024-12-06 12:25:48.400127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73760 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 
[2024-12-06 12:25:48.400377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.904 [2024-12-06 12:25:48.400743] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.904 [2024-12-06 12:25:48.400845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.904 [2024-12-06 12:25:48.400854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.400861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.400871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.905 [2024-12-06 12:25:48.400879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.400888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.400896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.400906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.400914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.400924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.400932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.400941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.400949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.400958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.400966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.400975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.400983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.400992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:01.905 [2024-12-06 12:25:48.401094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 
12:25:48.401287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.905 [2024-12-06 12:25:48.401547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.905 [2024-12-06 12:25:48.401554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:117 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73464 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.401968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:01.906 [2024-12-06 12:25:48.401985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.401994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.906 [2024-12-06 12:25:48.402238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.906 [2024-12-06 12:25:48.402247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.907 [2024-12-06 12:25:48.402256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.907 [2024-12-06 12:25:48.402265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.907 [2024-12-06 12:25:48.402273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.907 [2024-12-06 12:25:48.402282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb57690 is same with the state(6) to be set 00:18:01.907 [2024-12-06 12:25:48.402294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:01.907 [2024-12-06 12:25:48.402300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:01.907 [2024-12-06 12:25:48.402307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73680 len:8 PRP1 0x0 PRP2 0x0 00:18:01.907 [2024-12-06 12:25:48.402317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.907 [2024-12-06 12:25:48.402559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:01.907 [2024-12-06 12:25:48.402628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf7e50 (9): Bad file descriptor 00:18:01.907 [2024-12-06 12:25:48.402719] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:01.907 [2024-12-06 12:25:48.402738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf7e50 with 
addr=10.0.0.3, port=4420 00:18:01.907 [2024-12-06 12:25:48.402748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7e50 is same with the state(6) to be set 00:18:01.907 [2024-12-06 12:25:48.402764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf7e50 (9): Bad file descriptor 00:18:01.907 [2024-12-06 12:25:48.402778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:01.907 [2024-12-06 12:25:48.402786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:01.907 [2024-12-06 12:25:48.402795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:01.907 [2024-12-06 12:25:48.402804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:18:01.907 [2024-12-06 12:25:48.402813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:01.907 12:25:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:03.779 4554.50 IOPS, 17.79 MiB/s [2024-12-06T12:25:50.437Z] 3036.33 IOPS, 11.86 MiB/s [2024-12-06T12:25:50.437Z] [2024-12-06 12:25:50.402985] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.779 [2024-12-06 12:25:50.403050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf7e50 with addr=10.0.0.3, port=4420 00:18:03.779 [2024-12-06 12:25:50.403064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7e50 is same with the state(6) to be set 00:18:03.779 [2024-12-06 12:25:50.403086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf7e50 (9): Bad file descriptor 00:18:03.779 [2024-12-06 12:25:50.403105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:03.779 [2024-12-06 12:25:50.403114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:03.779 [2024-12-06 12:25:50.403124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:03.779 [2024-12-06 12:25:50.403133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
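The connect() failures reported above by uring_sock_create carry errno = 111, which on Linux is ECONNREFUSED: nothing is accepting connections on 10.0.0.3:4420 any more, so each reconnect attempt fails immediately and bdev_nvme schedules another controller reset. A one-line check of the errno name (illustrative only, not part of the test script):

  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'
  # prints: 111 Connection refused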
00:18:03.779 [2024-12-06 12:25:50.403143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:03.779 12:25:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:03.779 12:25:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:03.779 12:25:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:04.039 12:25:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:04.039 12:25:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:04.330 12:25:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:04.330 12:25:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:04.595 12:25:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:04.595 12:25:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:05.786 2277.25 IOPS, 8.90 MiB/s [2024-12-06T12:25:52.444Z] 1821.80 IOPS, 7.12 MiB/s [2024-12-06T12:25:52.444Z] [2024-12-06 12:25:52.403370] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:05.786 [2024-12-06 12:25:52.403432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf7e50 with addr=10.0.0.3, port=4420 00:18:05.786 [2024-12-06 12:25:52.403448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7e50 is same with the state(6) to be set 00:18:05.786 [2024-12-06 12:25:52.403470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf7e50 (9): Bad file descriptor 00:18:05.786 [2024-12-06 12:25:52.403489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:05.786 [2024-12-06 12:25:52.403498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:05.786 [2024-12-06 12:25:52.403508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:05.786 [2024-12-06 12:25:52.403518] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:18:05.786 [2024-12-06 12:25:52.403529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:07.655 1518.17 IOPS, 5.93 MiB/s [2024-12-06T12:25:54.572Z] 1301.29 IOPS, 5.08 MiB/s [2024-12-06T12:25:54.572Z] [2024-12-06 12:25:54.403696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:07.914 [2024-12-06 12:25:54.403744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:07.914 [2024-12-06 12:25:54.403755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:07.914 [2024-12-06 12:25:54.403764] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:18:07.914 [2024-12-06 12:25:54.403774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
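The get_controller and get_bdev checks above (host/timeout.sh@57 and @58, repeated at @62/@63 below) query the bdevperf application over its RPC socket and compare the returned names against NVMe0 and NVMe0n1; once the controller is finally torn down, the same helpers return empty strings, which is what the later '' == '' comparisons show. A minimal sketch of what those helpers do, assuming the rpc.py path and bdevperf RPC socket used in this run:

  get_controller() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_nvme_get_controllers | jq -r '.[].name'
  }
  get_bdev() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_get_bdevs | jq -r '.[].name'
  }
  [[ "$(get_controller)" == "NVMe0" ]]    # controller object still registered while reconnects are retried
  [[ "$(get_bdev)" == "NVMe0n1" ]]        # bdev still exposed to the I/O path at this point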
00:18:08.848 1138.62 IOPS, 4.45 MiB/s 00:18:08.848 Latency(us) 00:18:08.848 [2024-12-06T12:25:55.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.848 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:08.848 Verification LBA range: start 0x0 length 0x4000 00:18:08.848 NVMe0n1 : 8.15 1118.00 4.37 15.71 0.00 112760.18 3455.53 7015926.69 00:18:08.848 [2024-12-06T12:25:55.506Z] =================================================================================================================== 00:18:08.848 [2024-12-06T12:25:55.506Z] Total : 1118.00 4.37 15.71 0.00 112760.18 3455.53 7015926.69 00:18:08.848 { 00:18:08.848 "results": [ 00:18:08.848 { 00:18:08.848 "job": "NVMe0n1", 00:18:08.848 "core_mask": "0x4", 00:18:08.848 "workload": "verify", 00:18:08.848 "status": "finished", 00:18:08.848 "verify_range": { 00:18:08.848 "start": 0, 00:18:08.848 "length": 16384 00:18:08.848 }, 00:18:08.848 "queue_depth": 128, 00:18:08.848 "io_size": 4096, 00:18:08.848 "runtime": 8.147591, 00:18:08.848 "iops": 1117.9991730070888, 00:18:08.848 "mibps": 4.3671842695589405, 00:18:08.848 "io_failed": 128, 00:18:08.848 "io_timeout": 0, 00:18:08.848 "avg_latency_us": 112760.18109933371, 00:18:08.848 "min_latency_us": 3455.5345454545454, 00:18:08.848 "max_latency_us": 7015926.69090909 00:18:08.848 } 00:18:08.848 ], 00:18:08.848 "core_count": 1 00:18:08.848 } 00:18:09.412 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:18:09.412 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:09.412 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:09.670 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:09.670 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:18:09.670 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:09.670 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:09.928 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:09.928 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81341 00:18:09.928 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81329 00:18:09.928 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81329 ']' 00:18:09.928 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81329 00:18:09.928 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:10.186 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.186 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81329 00:18:10.186 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:10.186 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:10.186 killing process with pid 81329 00:18:10.186 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81329' 00:18:10.186 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@973 -- # kill 81329 00:18:10.186 Received shutdown signal, test time was about 9.359780 seconds 00:18:10.186 00:18:10.186 Latency(us) 00:18:10.186 [2024-12-06T12:25:56.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.186 [2024-12-06T12:25:56.844Z] =================================================================================================================== 00:18:10.186 [2024-12-06T12:25:56.844Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.186 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81329 00:18:10.186 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:10.443 [2024-12-06 12:25:56.949942] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:10.443 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81469 00:18:10.443 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:10.443 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81469 /var/tmp/bdevperf.sock 00:18:10.443 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81469 ']' 00:18:10.443 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.443 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.443 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.443 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.443 12:25:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:10.443 [2024-12-06 12:25:57.016025] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:18:10.443 [2024-12-06 12:25:57.016106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81469 ] 00:18:10.701 [2024-12-06 12:25:57.155937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.701 [2024-12-06 12:25:57.185559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.701 [2024-12-06 12:25:57.213740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:10.701 12:25:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.701 12:25:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:10.701 12:25:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:10.959 12:25:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:11.218 NVMe0n1 00:18:11.218 12:25:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:11.218 12:25:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81480 00:18:11.218 12:25:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:18:11.476 Running I/O for 10 seconds... 
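Before perform_tests starts the 10-second workload, the controller for this second bdevperf instance is attached with explicit reconnect and timeout knobs (host/timeout.sh@78 and @79 above). A standalone restatement of those two RPC calls, with the values copied from this run; the $rpc and $sock variables are shorthand introduced here for readability, and the comments describe the documented intent of each option:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # Same bdev_nvme_set_options -r -1 call as issued at host/timeout.sh@78.
  "$rpc" -s "$sock" bdev_nvme_set_options -r -1

  # Attach the target with the timeout knobs this test exercises:
  #   --ctrlr-loss-timeout-sec 5    give up and delete the controller after 5 s without a connection
  #   --fast-io-fail-timeout-sec 2  start failing queued I/O after 2 s of disconnection
  #   --reconnect-delay-sec 1       retry the TCP connection roughly once per second
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

Immediately after the workload starts, the test removes the target listener (host/timeout.sh@87 below), which is what triggers the flood of ABORTED - SQ DELETION completions that follows.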
00:18:12.412 12:25:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:12.412 7957.00 IOPS, 31.08 MiB/s [2024-12-06T12:25:59.070Z] [2024-12-06 12:25:59.045562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 
00:18:12.412 [2024-12-06 12:25:59.045768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.045865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b93aa0 is same with the state(6) to be set 00:18:12.412 [2024-12-06 12:25:59.047892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.412 [2024-12-06 12:25:59.047921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.412 [2024-12-06 12:25:59.047940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.412 [2024-12-06 12:25:59.047951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.412 [2024-12-06 12:25:59.047961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.412 [2024-12-06 12:25:59.047985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.412 [2024-12-06 12:25:59.047996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.412 
[2024-12-06 12:25:59.048005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.412 [2024-12-06 12:25:59.048015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.412 [2024-12-06 12:25:59.048023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.412 [2024-12-06 12:25:59.048033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.412 [2024-12-06 12:25:59.048042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.412 [2024-12-06 12:25:59.048052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.412 [2024-12-06 12:25:59.048060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.412 [2024-12-06 12:25:59.048070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.412 [2024-12-06 12:25:59.048078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.412 [2024-12-06 12:25:59.048088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.412 [2024-12-06 12:25:59.048097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.412 [2024-12-06 12:25:59.048107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.412 [2024-12-06 12:25:59.048122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.412 [2024-12-06 12:25:59.048132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.413 [2024-12-06 12:25:59.048795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.413 [2024-12-06 12:25:59.048812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 
[2024-12-06 12:25:59.048822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f690 is same with the state(6) to be set 00:18:12.413 [2024-12-06 12:25:59.048832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.413 [2024-12-06 12:25:59.048839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.413 [2024-12-06 12:25:59.048846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72864 len:8 PRP1 0x0 PRP2 0x0 00:18:12.413 [2024-12-06 12:25:59.048854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.413 [2024-12-06 12:25:59.048870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.413 [2024-12-06 12:25:59.048882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72992 len:8 PRP1 0x0 PRP2 0x0 00:18:12.413 [2024-12-06 12:25:59.048891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.413 [2024-12-06 12:25:59.048906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.413 [2024-12-06 12:25:59.048912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73000 len:8 PRP1 0x0 PRP2 0x0 00:18:12.413 [2024-12-06 12:25:59.048920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.413 [2024-12-06 12:25:59.048928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.048935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.048942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73008 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.048950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.048958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.048964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.048971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73016 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.048979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.048987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.048993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73024 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73032 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73040 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73048 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73056 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73064 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73072 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73080 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73088 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73096 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73104 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73112 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73120 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:12.414 [2024-12-06 12:25:59.049407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73128 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73136 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73144 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73152 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73160 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73168 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049651] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73176 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73184 len:8 PRP1 0x0 PRP2 0x0 00:18:12.414 [2024-12-06 12:25:59.049710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.414 [2024-12-06 12:25:59.049719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.414 [2024-12-06 12:25:59.049725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.414 [2024-12-06 12:25:59.049732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73192 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.049740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.049749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.049756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.049763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73200 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.049771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.049780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.049786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.049793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73208 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.049802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.049811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.049817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.049825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73216 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.049833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.049841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.049848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.049855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73224 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.049864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.049873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.049879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.049887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73232 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.049895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.049910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.049916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.049923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73240 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.049931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.049940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.049947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.049957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73248 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.049965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.049974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.049981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.049988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73256 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.049996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73264 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 
12:25:59.050042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73272 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73280 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73288 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73296 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73304 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73312 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050276] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73320 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73328 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73336 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73344 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73352 len:8 PRP1 0x0 PRP2 0x0 00:18:12.415 [2024-12-06 12:25:59.050415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.415 [2024-12-06 12:25:59.050423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.415 [2024-12-06 12:25:59.050430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.415 [2024-12-06 12:25:59.050437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73360 len:8 PRP1 0x0 PRP2 0x0 00:18:12.674 12:25:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:18:12.674 [2024-12-06 12:25:59.063603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.674 [2024-12-06 12:25:59.063648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.674 
[2024-12-06 12:25:59.063658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.674 [2024-12-06 12:25:59.063681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73368 len:8 PRP1 0x0 PRP2 0x0 00:18:12.674 [2024-12-06 12:25:59.063690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.674 [2024-12-06 12:25:59.063699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.674 [2024-12-06 12:25:59.063706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.674 [2024-12-06 12:25:59.063715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73376 len:8 PRP1 0x0 PRP2 0x0 00:18:12.674 [2024-12-06 12:25:59.063724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.674 [2024-12-06 12:25:59.063747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.674 [2024-12-06 12:25:59.063754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.674 [2024-12-06 12:25:59.063761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73384 len:8 PRP1 0x0 PRP2 0x0 00:18:12.674 [2024-12-06 12:25:59.063769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.674 [2024-12-06 12:25:59.063778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.674 [2024-12-06 12:25:59.063784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.063806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73392 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.063814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.063822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.063828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.063835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73400 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.063843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.063851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.063857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.063864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73408 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.063872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.063880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.063887] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.063893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73416 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.063901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.063910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.063916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.063924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73424 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.063932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.063941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.063947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.063954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73432 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.063961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.063970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.063976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.063984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73440 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.063992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73448 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73456 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73464 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73472 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73480 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73488 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73496 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73504 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 
12:25:59.064316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73512 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73520 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73528 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73536 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73544 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73552 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73560 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.675 [2024-12-06 12:25:59.064596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73568 len:8 PRP1 0x0 PRP2 0x0 00:18:12.675 [2024-12-06 12:25:59.064604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.675 [2024-12-06 12:25:59.064613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.675 [2024-12-06 12:25:59.064620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.676 [2024-12-06 12:25:59.064627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73576 len:8 PRP1 0x0 PRP2 0x0 00:18:12.676 [2024-12-06 12:25:59.064635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.064644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.676 [2024-12-06 12:25:59.064651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.676 [2024-12-06 12:25:59.064658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73584 len:8 PRP1 0x0 PRP2 0x0 00:18:12.676 [2024-12-06 12:25:59.064666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.064674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.676 [2024-12-06 12:25:59.064681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.676 [2024-12-06 12:25:59.064688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73592 len:8 PRP1 0x0 PRP2 0x0 00:18:12.676 [2024-12-06 12:25:59.064696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.064705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.676 [2024-12-06 12:25:59.064711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.676 [2024-12-06 12:25:59.064718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73600 len:8 PRP1 0x0 PRP2 0x0 00:18:12.676 [2024-12-06 12:25:59.064726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.064735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.676 [2024-12-06 12:25:59.064742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.676 [2024-12-06 12:25:59.064749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:73608 len:8 PRP1 0x0 PRP2 0x0 00:18:12.676 [2024-12-06 12:25:59.064757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.064782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.676 [2024-12-06 12:25:59.064789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.676 [2024-12-06 12:25:59.064796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73616 len:8 PRP1 0x0 PRP2 0x0 00:18:12.676 [2024-12-06 12:25:59.064804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.064813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.676 [2024-12-06 12:25:59.064820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.676 [2024-12-06 12:25:59.064827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73624 len:8 PRP1 0x0 PRP2 0x0 00:18:12.676 [2024-12-06 12:25:59.064836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.064859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.676 [2024-12-06 12:25:59.064866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.676 [2024-12-06 12:25:59.064874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73632 len:8 PRP1 0x0 PRP2 0x0 00:18:12.676 [2024-12-06 12:25:59.064882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.064891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.676 [2024-12-06 12:25:59.064897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.676 [2024-12-06 12:25:59.064904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73640 len:8 PRP1 0x0 PRP2 0x0 00:18:12.676 [2024-12-06 12:25:59.064927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.064935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.676 [2024-12-06 12:25:59.064942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.676 [2024-12-06 12:25:59.064949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73648 len:8 PRP1 0x0 PRP2 0x0 00:18:12.676 [2024-12-06 12:25:59.064956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.065111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.676 [2024-12-06 12:25:59.065128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.065154] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.676 [2024-12-06 12:25:59.065163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.065172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.676 [2024-12-06 12:25:59.065215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.065225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.676 [2024-12-06 12:25:59.065235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.676 [2024-12-06 12:25:59.065244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afe50 is same with the state(6) to be set 00:18:12.676 [2024-12-06 12:25:59.065526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:12.676 [2024-12-06 12:25:59.065581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afe50 (9): Bad file descriptor 00:18:12.676 [2024-12-06 12:25:59.065675] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.676 [2024-12-06 12:25:59.065724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12afe50 with addr=10.0.0.3, port=4420 00:18:12.676 [2024-12-06 12:25:59.065734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afe50 is same with the state(6) to be set 00:18:12.676 [2024-12-06 12:25:59.065750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afe50 (9): Bad file descriptor 00:18:12.676 [2024-12-06 12:25:59.065764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:18:12.676 [2024-12-06 12:25:59.065773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:18:12.676 [2024-12-06 12:25:59.065782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:18:12.676 [2024-12-06 12:25:59.065791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
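The "connect() failed, errno = 111" from uring_sock_create above is ECONNREFUSED on Linux: with the target's listener evidently down at this point, every reconnect attempt from the host is refused, which is why the controller reset keeps failing until the listener is re-added at host/timeout.sh@91 below. A quick, standalone check of that errno value (plain Python, not part of the test suite):

    import errno

    # On Linux, errno 111 maps to ECONNREFUSED: nothing is accepting
    # connections on 10.0.0.3:4420 at this point in the timeout test.
    print(errno.ECONNREFUSED, errno.errorcode[111])  # 111 ECONNREFUSED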
00:18:12.676 [2024-12-06 12:25:59.065801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
00:18:13.610 4539.50 IOPS, 17.73 MiB/s [2024-12-06T12:26:00.268Z] [2024-12-06 12:26:00.065911] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 
00:18:13.610 12:26:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:18:13.610 [2024-12-06 12:26:00.065974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12afe50 with addr=10.0.0.3, port=4420 
00:18:13.610 [2024-12-06 12:26:00.065988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afe50 is same with the state(6) to be set 
00:18:13.610 [2024-12-06 12:26:00.066011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afe50 (9): Bad file descriptor 
00:18:13.610 [2024-12-06 12:26:00.066029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
00:18:13.610 [2024-12-06 12:26:00.066039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
00:18:13.610 [2024-12-06 12:26:00.066049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:18:13.610 [2024-12-06 12:26:00.066058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:18:13.610 [2024-12-06 12:26:00.066069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
00:18:13.868 [2024-12-06 12:26:00.320920] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 
00:18:13.868 12:26:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81480 
00:18:14.441 3026.33 IOPS, 11.82 MiB/s [2024-12-06T12:26:01.099Z] [2024-12-06 12:26:01.083227] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
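The reset only succeeds once host/timeout.sh@91 re-adds the TCP listener and the target logs the "Listening on 10.0.0.3 port 4420" notice; the matching removal appears at host/timeout.sh@99 further down. A minimal sketch of driving the same remove/add cycle from Python instead of the shell trace (rpc.py path, NQN, address and port copied from the log; the toggle_listener helper is hypothetical):

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    NQN = "nqn.2016-06.io.spdk:cnode1"

    def toggle_listener(action: str) -> None:
        # Same CLI invocation the test script uses; action is "add" or "remove".
        subprocess.run(
            [RPC, f"nvmf_subsystem_{action}_listener", NQN,
             "-t", "tcp", "-a", "10.0.0.3", "-s", "4420"],
            check=True,
        )

    # The timeout test removes the listener to force reconnect failures,
    # then adds it back so the pending controller reset can complete:
    #   toggle_listener("remove")  ...  toggle_listener("add")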
00:18:16.302 2269.75 IOPS, 8.87 MiB/s [2024-12-06T12:26:04.333Z] 3628.60 IOPS, 14.17 MiB/s [2024-12-06T12:26:05.275Z] 4857.17 IOPS, 18.97 MiB/s [2024-12-06T12:26:06.209Z] 5718.71 IOPS, 22.34 MiB/s [2024-12-06T12:26:07.141Z] 6370.62 IOPS, 24.89 MiB/s [2024-12-06T12:26:08.075Z] 6873.67 IOPS, 26.85 MiB/s [2024-12-06T12:26:08.075Z] 7272.50 IOPS, 28.41 MiB/s 
00:18:21.417 Latency(us) 
00:18:21.417 [2024-12-06T12:26:08.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:21.417 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:18:21.417 Verification LBA range: start 0x0 length 0x4000 
00:18:21.417 NVMe0n1 : 10.01 7278.97 28.43 0.00 0.00 17555.11 1325.61 3035150.89 
00:18:21.417 [2024-12-06T12:26:08.075Z] =================================================================================================================== 
00:18:21.417 [2024-12-06T12:26:08.075Z] Total : 7278.97 28.43 0.00 0.00 17555.11 1325.61 3035150.89 
00:18:21.417 { 
00:18:21.417   "results": [ 
00:18:21.417     { 
00:18:21.417       "job": "NVMe0n1", 
00:18:21.417       "core_mask": "0x4", 
00:18:21.417       "workload": "verify", 
00:18:21.417       "status": "finished", 
00:18:21.417       "verify_range": { 
00:18:21.417         "start": 0, 
00:18:21.417         "length": 16384 
00:18:21.417       }, 
00:18:21.417       "queue_depth": 128, 
00:18:21.417       "io_size": 4096, 
00:18:21.417       "runtime": 10.008703, 
00:18:21.417       "iops": 7278.965116658972, 
00:18:21.417       "mibps": 28.43345748694911, 
00:18:21.417       "io_failed": 0, 
00:18:21.417       "io_timeout": 0, 
00:18:21.417       "avg_latency_us": 17555.10845106522, 
00:18:21.417       "min_latency_us": 1325.6145454545454, 
00:18:21.417       "max_latency_us": 3035150.8945454545 
00:18:21.417     } 
00:18:21.417   ], 
00:18:21.417   "core_count": 1 
00:18:21.417 } 
00:18:21.417 12:26:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81590 
00:18:21.417 12:26:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:18:21.417 12:26:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 
00:18:21.417 Running I/O for 10 seconds... 
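As a quick sanity check on the bdevperf summary above, the MiB/s figure follows directly from the reported IOPS and the 4096-byte I/O size, and the average latency is roughly what Little's law predicts at queue depth 128 (a rough estimate that ignores ramp-up; all numbers are taken from the JSON block above):

    # Values copied from the "results" JSON printed above.
    iops = 7278.965116658972
    io_size = 4096          # bytes per I/O
    queue_depth = 128

    mibps = iops * io_size / (1024 ** 2)
    print(f"{mibps:.2f} MiB/s")        # ~28.43, matches the reported "mibps"

    # Little's law: in-flight I/Os = IOPS * latency, so with the queue kept
    # full, latency ~= queue_depth / IOPS.
    est_latency_us = queue_depth / iops * 1e6
    print(f"{est_latency_us:.0f} us")  # ~17585 us vs. the reported 17555 us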
00:18:22.352 12:26:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:22.613 8084.00 IOPS, 31.58 MiB/s [2024-12-06T12:26:09.271Z] [2024-12-06 12:26:09.209584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74784 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.209980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.209991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.210000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.210011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.210020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.210032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.210042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.210052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.210061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.210071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 
[2024-12-06 12:26:09.210080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.613 [2024-12-06 12:26:09.210090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.613 [2024-12-06 12:26:09.210099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.614 [2024-12-06 12:26:09.210118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.614 [2024-12-06 12:26:09.210137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.614 [2024-12-06 12:26:09.210471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.614 [2024-12-06 12:26:09.210491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.614 [2024-12-06 12:26:09.210645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.614 [2024-12-06 12:26:09.210869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.614 [2024-12-06 12:26:09.210877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:22.615 [2024-12-06 12:26:09.210888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.210897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.210907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.210916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.210926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.210935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.210945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.210954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.210965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.210974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.210985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.210994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211081] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.615 [2024-12-06 12:26:09.211731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.615 [2024-12-06 12:26:09.211740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74488 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:22.616 [2024-12-06 12:26:09.211969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.211979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.211988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.616 [2024-12-06 12:26:09.212307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130e1b0 is same with the state(6) to be set 00:18:22.616 [2024-12-06 12:26:09.212331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.616 [2024-12-06 12:26:09.212339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.616 [2024-12-06 12:26:09.212348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74704 len:8 PRP1 0x0 PRP2 0x0 00:18:22.616 [2024-12-06 12:26:09.212357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.616 [2024-12-06 12:26:09.212512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.616 [2024-12-06 12:26:09.212532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.616 [2024-12-06 12:26:09.212566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.616 [2024-12-06 12:26:09.212584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.616 [2024-12-06 12:26:09.212592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afe50 is same with the state(6) to be set 00:18:22.616 [2024-12-06 12:26:09.212837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:22.616 [2024-12-06 12:26:09.212893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afe50 (9): Bad file descriptor 00:18:22.616 [2024-12-06 12:26:09.213028] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:22.616 [2024-12-06 12:26:09.213054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12afe50 with addr=10.0.0.3, port=4420 00:18:22.616 [2024-12-06 12:26:09.213065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afe50 is same with the state(6) to be set 00:18:22.616 [2024-12-06 12:26:09.213086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afe50 (9): Bad file descriptor 00:18:22.616 [2024-12-06 12:26:09.213104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:22.616 [2024-12-06 12:26:09.213113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:22.616 [2024-12-06 12:26:09.213135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:22.617 [2024-12-06 12:26:09.213153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
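Each of the aborted entries above carries the completion status "(00/08)", i.e. status code type 0x0 with status code 0x08, which the log itself labels ABORTED - SQ DELETION: the expected outcome when the target tears its submission queue down while I/O is outstanding. A tiny, hedged decoder sketch for that "(SCT/SC)" pair; only the codes that actually appear in this log are mapped, everything else falls back to hex:

    # Minimal decoder for the "(SCT/SC)" pair printed in the entries above.
    # Only the codes seen in this log are mapped; anything else is shown raw.
    GENERIC_STATUS = {
        0x00: "SUCCESSFUL COMPLETION",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode_status(sct: int, sc: int) -> str:
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
        return f"sct=0x{sct:x} sc=0x{sc:02x}"

    print(decode_status(0x00, 0x08))  # -> ABORTED - SQ DELETION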
00:18:22.617 [2024-12-06 12:26:09.213165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:22.617 12:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:18:23.810 4618.50 IOPS, 18.04 MiB/s [2024-12-06T12:26:10.468Z] [2024-12-06 12:26:10.213300] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:23.810 [2024-12-06 12:26:10.213382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12afe50 with addr=10.0.0.3, port=4420 00:18:23.810 [2024-12-06 12:26:10.213398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afe50 is same with the state(6) to be set 00:18:23.810 [2024-12-06 12:26:10.213421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afe50 (9): Bad file descriptor 00:18:23.810 [2024-12-06 12:26:10.213439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:23.810 [2024-12-06 12:26:10.213448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:23.810 [2024-12-06 12:26:10.213458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:23.810 [2024-12-06 12:26:10.213469] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:18:23.810 [2024-12-06 12:26:10.213479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:24.746 3079.00 IOPS, 12.03 MiB/s [2024-12-06T12:26:11.404Z] [2024-12-06 12:26:11.213564] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:24.746 [2024-12-06 12:26:11.213619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12afe50 with addr=10.0.0.3, port=4420 00:18:24.746 [2024-12-06 12:26:11.213632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afe50 is same with the state(6) to be set 00:18:24.746 [2024-12-06 12:26:11.213652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afe50 (9): Bad file descriptor 00:18:24.746 [2024-12-06 12:26:11.213668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:24.746 [2024-12-06 12:26:11.213677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:24.746 [2024-12-06 12:26:11.213686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:24.746 [2024-12-06 12:26:11.213696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
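The connect() failures above report errno = 111, which on Linux is ECONNREFUSED: the test has removed the target's listener, so every uring_sock_create() retry is refused until the listener is added back further down. A one-liner to confirm the mapping (Linux errno values assumed):

    import errno, os
    # errno 111 on Linux is ECONNREFUSED ("Connection refused"), which is why
    # each reconnect attempt fails while the listener is removed.
    print(errno.errorcode[111], os.strerror(111))   # ECONNREFUSED Connection refused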
00:18:24.746 [2024-12-06 12:26:11.213705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:25.682 2309.25 IOPS, 9.02 MiB/s [2024-12-06T12:26:12.340Z] [2024-12-06 12:26:12.216682] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:25.682 [2024-12-06 12:26:12.216754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12afe50 with addr=10.0.0.3, port=4420 00:18:25.682 [2024-12-06 12:26:12.216767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afe50 is same with the state(6) to be set 00:18:25.682 [2024-12-06 12:26:12.217032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afe50 (9): Bad file descriptor 00:18:25.682 [2024-12-06 12:26:12.217288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:25.682 [2024-12-06 12:26:12.217311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:25.682 [2024-12-06 12:26:12.217322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:25.682 [2024-12-06 12:26:12.217331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:18:25.682 [2024-12-06 12:26:12.217341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:25.682 12:26:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:25.940 [2024-12-06 12:26:12.487314] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:25.940 12:26:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81590 00:18:26.764 1847.40 IOPS, 7.22 MiB/s [2024-12-06T12:26:13.422Z] [2024-12-06 12:26:13.239894] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
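A side note on the IOPS figures interleaved above (4618.50, 3079.00, 2309.25, 1847.40): bdevperf prints a cumulative average, so while the controller is unreachable the numerator stays frozen and the running average decays as elapsed time grows. A minimal sketch of that arithmetic; the 9237 completed-I/O total and the 2-5 s elapsed times are inferred from the printed averages, not taken directly from the log:

    # Hypothetical reconstruction of the decaying averages printed above:
    # completed I/Os stop growing while the target is down, elapsed time does not.
    completed_ios = 9237          # assumed total before the stall (4618.50 IOPS * 2 s)
    for elapsed_s in (2, 3, 4, 5):
        avg_iops = completed_ios / elapsed_s
        print(f"t={elapsed_s}s  cumulative average = {avg_iops:.2f} IOPS")
    # -> 4618.50, 3079.00, 2309.25, 1847.40, matching the figures in the log.

Once the listener is re-added and the reset succeeds (the "Resetting controller successful" entry above), the average climbs back toward the steady-state rate, which is exactly the ramp seen next.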
00:18:28.634 3002.00 IOPS, 11.73 MiB/s [2024-12-06T12:26:16.227Z] 4118.29 IOPS, 16.09 MiB/s [2024-12-06T12:26:17.163Z] 4967.50 IOPS, 19.40 MiB/s [2024-12-06T12:26:18.130Z] 5638.67 IOPS, 22.03 MiB/s [2024-12-06T12:26:18.130Z] 6156.40 IOPS, 24.05 MiB/s 00:18:31.472 Latency(us) 00:18:31.472 [2024-12-06T12:26:18.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.472 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:31.472 Verification LBA range: start 0x0 length 0x4000 00:18:31.472 NVMe0n1 : 10.01 6158.23 24.06 4124.74 0.00 12422.97 700.04 3019898.88 00:18:31.472 [2024-12-06T12:26:18.130Z] =================================================================================================================== 00:18:31.472 [2024-12-06T12:26:18.130Z] Total : 6158.23 24.06 4124.74 0.00 12422.97 0.00 3019898.88 00:18:31.472 { 00:18:31.472 "results": [ 00:18:31.472 { 00:18:31.472 "job": "NVMe0n1", 00:18:31.472 "core_mask": "0x4", 00:18:31.472 "workload": "verify", 00:18:31.472 "status": "finished", 00:18:31.472 "verify_range": { 00:18:31.472 "start": 0, 00:18:31.472 "length": 16384 00:18:31.472 }, 00:18:31.472 "queue_depth": 128, 00:18:31.472 "io_size": 4096, 00:18:31.472 "runtime": 10.007419, 00:18:31.472 "iops": 6158.231208266587, 00:18:31.472 "mibps": 24.055590657291354, 00:18:31.472 "io_failed": 41278, 00:18:31.472 "io_timeout": 0, 00:18:31.472 "avg_latency_us": 12422.966331073549, 00:18:31.472 "min_latency_us": 700.0436363636363, 00:18:31.472 "max_latency_us": 3019898.88 00:18:31.472 } 00:18:31.472 ], 00:18:31.472 "core_count": 1 00:18:31.472 } 00:18:31.472 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81469 00:18:31.472 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81469 ']' 00:18:31.472 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81469 00:18:31.472 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:31.472 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.472 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81469 00:18:31.472 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:31.472 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:31.472 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81469' 00:18:31.472 killing process with pid 81469 00:18:31.472 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.472 00:18:31.472 Latency(us) 00:18:31.472 [2024-12-06T12:26:18.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.473 [2024-12-06T12:26:18.131Z] =================================================================================================================== 00:18:31.473 [2024-12-06T12:26:18.131Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.473 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81469 00:18:31.473 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81469 00:18:31.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
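The JSON block above is the structured form of the same bdevperf summary. A small, hedged sketch of how such a report could be post-processed; "results.json" is an assumed filename, while the field names are copied from the block above:

    import json

    # Hypothetical post-processing of the bdevperf JSON summary shown above.
    with open("results.json") as f:
        report = json.load(f)
    for job in report["results"]:
        # MiB/s can be re-derived from iops * io_size; it matches the "mibps" field.
        mib_s = job["iops"] * job["io_size"] / (1024 * 1024)
        print(f'{job["job"]}: {job["iops"]:.2f} IOPS ({mib_s:.2f} MiB/s), '
              f'io_failed={job["io_failed"]}, avg latency {job["avg_latency_us"]:.1f} us')

For the run above this yields roughly 6158.23 IOPS at 24.06 MiB/s with 41278 failed I/Os, consistent with the 10-second window during which the controller was deliberately kept unreachable.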
00:18:31.731 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:31.731 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81703 00:18:31.731 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81703 /var/tmp/bdevperf.sock 00:18:31.731 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81703 ']' 00:18:31.731 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.731 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.731 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.731 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.731 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:31.731 [2024-12-06 12:26:18.298686] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:18:31.731 [2024-12-06 12:26:18.298774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81703 ] 00:18:31.990 [2024-12-06 12:26:18.437048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.990 [2024-12-06 12:26:18.466142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.990 [2024-12-06 12:26:18.494377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:31.990 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.990 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:31.990 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81707 00:18:31.990 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81703 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:31.990 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:32.249 12:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:32.507 NVMe0n1 00:18:32.766 12:26:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81749 00:18:32.766 12:26:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:32.766 12:26:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:18:32.766 Running I/O for 10 seconds... 
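The attach above passes --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, i.e. the bdev layer keeps retrying the connection every 2 s and gives the controller up once 5 s pass without a successful reconnect; that window is what this timeout test exercises. A rough sketch of the implied retry schedule, illustrating the assumed option semantics rather than SPDK's actual implementation:

    # Rough illustration of the reconnect window implied by
    # --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 (assumed semantics:
    # retry every reconnect_delay seconds until ctrlr_loss_timeout elapses).
    def reconnect_schedule(ctrlr_loss_timeout_s: int, reconnect_delay_s: int):
        t, attempts = 0, []
        while t < ctrlr_loss_timeout_s:
            attempts.append(t)
            t += reconnect_delay_s
        return attempts

    print(reconnect_schedule(5, 2))   # [0, 2, 4] -> controller declared lost after ~5 s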
00:18:33.703 12:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:33.965 17018.00 IOPS, 66.48 MiB/s [2024-12-06T12:26:20.623Z] [2024-12-06 12:26:20.381199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.381283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.965 [2024-12-06 12:26:20.381312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.965 [2024-12-06 12:26:20.381323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.965 [2024-12-06 12:26:20.381333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.965 [2024-12-06 12:26:20.381342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.965 [2024-12-06 12:26:20.381349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.965 [2024-12-06 12:26:20.381357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.965 [2024-12-06 12:26:20.381365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.965 [2024-12-06 12:26:20.381373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999e50 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.382964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.383964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 
12:26:20.384216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.384967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same 
with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.965 [2024-12-06 12:26:20.385751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.385804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.385856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.385908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.385961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386570] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.386986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the 
state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2e10 is same with the state(6) to be set 00:18:33.966 [2024-12-06 12:26:20.387826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.966 [2024-12-06 12:26:20.387844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.966 [2024-12-06 12:26:20.387862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.966 [2024-12-06 12:26:20.387871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.966 [2024-12-06 12:26:20.387882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.966 [2024-12-06 12:26:20.387890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.966 [2024-12-06 12:26:20.387899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.966 [2024-12-06 12:26:20.387907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.966 [2024-12-06 12:26:20.387917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.966 [2024-12-06 12:26:20.387925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.966 [2024-12-06 12:26:20.387934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.966 [2024-12-06 12:26:20.387942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.966 [2024-12-06 12:26:20.387952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.966 [2024-12-06 12:26:20.387959] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.966 [2024-12-06 12:26:20.387969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.966 [2024-12-06 12:26:20.387979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.966 [2024-12-06 12:26:20.387989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.966 [2024-12-06 12:26:20.387996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388137] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:33.967 [2024-12-06 12:26:20.388544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388723] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.967 [2024-12-06 12:26:20.388745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.967 [2024-12-06 12:26:20.388755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.388986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.388996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389089] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:33.968 [2024-12-06 12:26:20.389475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.968 [2024-12-06 12:26:20.389485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 
12:26:20.389657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.389984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.389993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.390001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.390011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.390019] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.390029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.390037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.390046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.390055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.390064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.390072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.390082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.969 [2024-12-06 12:26:20.390091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.969 [2024-12-06 12:26:20.390100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.970 [2024-12-06 12:26:20.390109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.970 [2024-12-06 12:26:20.390118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.970 [2024-12-06 12:26:20.390126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.970 [2024-12-06 12:26:20.390136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.970 [2024-12-06 12:26:20.390144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.970 [2024-12-06 12:26:20.390154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.970 [2024-12-06 12:26:20.390162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.970 [2024-12-06 12:26:20.390180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.970 [2024-12-06 12:26:20.390190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.970 [2024-12-06 12:26:20.390199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06920 is same with the state(6) to be set 00:18:33.970 [2024-12-06 12:26:20.390213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.970 
[2024-12-06 12:26:20.390220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.970 [2024-12-06 12:26:20.390228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61376 len:8 PRP1 0x0 PRP2 0x0 00:18:33.970 [2024-12-06 12:26:20.390235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.970 [2024-12-06 12:26:20.390541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:33.970 [2024-12-06 12:26:20.390576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999e50 (9): Bad file descriptor 00:18:33.970 [2024-12-06 12:26:20.390678] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:33.970 [2024-12-06 12:26:20.390699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999e50 with addr=10.0.0.3, port=4420 00:18:33.970 [2024-12-06 12:26:20.390710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999e50 is same with the state(6) to be set 00:18:33.970 [2024-12-06 12:26:20.390727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999e50 (9): Bad file descriptor 00:18:33.970 [2024-12-06 12:26:20.390743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:33.970 [2024-12-06 12:26:20.390751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:33.970 [2024-12-06 12:26:20.390761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:33.970 [2024-12-06 12:26:20.390771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:33.970 [2024-12-06 12:26:20.390780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:33.970 12:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81749 00:18:35.840 9525.00 IOPS, 37.21 MiB/s [2024-12-06T12:26:22.498Z] 6350.00 IOPS, 24.80 MiB/s [2024-12-06T12:26:22.498Z] [2024-12-06 12:26:22.390964] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.840 [2024-12-06 12:26:22.391172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999e50 with addr=10.0.0.3, port=4420 00:18:35.840 [2024-12-06 12:26:22.391464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999e50 is same with the state(6) to be set 00:18:35.840 [2024-12-06 12:26:22.391676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999e50 (9): Bad file descriptor 00:18:35.840 [2024-12-06 12:26:22.391931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:35.840 [2024-12-06 12:26:22.392121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:35.840 [2024-12-06 12:26:22.392283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:18:35.840 [2024-12-06 12:26:22.392337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:35.840 [2024-12-06 12:26:22.392453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:37.715 4762.50 IOPS, 18.60 MiB/s [2024-12-06T12:26:24.631Z] 3810.00 IOPS, 14.88 MiB/s [2024-12-06T12:26:24.631Z] [2024-12-06 12:26:24.392596] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:37.973 [2024-12-06 12:26:24.392659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x999e50 with addr=10.0.0.3, port=4420 00:18:37.973 [2024-12-06 12:26:24.392675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x999e50 is same with the state(6) to be set 00:18:37.973 [2024-12-06 12:26:24.392697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x999e50 (9): Bad file descriptor 00:18:37.973 [2024-12-06 12:26:24.392715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:37.973 [2024-12-06 12:26:24.392724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:37.973 [2024-12-06 12:26:24.392734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:37.973 [2024-12-06 12:26:24.392743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:37.973 [2024-12-06 12:26:24.392754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:39.839 3175.00 IOPS, 12.40 MiB/s [2024-12-06T12:26:26.497Z] 2721.43 IOPS, 10.63 MiB/s [2024-12-06T12:26:26.497Z] [2024-12-06 12:26:26.392811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:39.839 [2024-12-06 12:26:26.392849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:39.839 [2024-12-06 12:26:26.392875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:39.839 [2024-12-06 12:26:26.392884] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:18:39.839 [2024-12-06 12:26:26.392894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:18:40.772 2381.25 IOPS, 9.30 MiB/s 00:18:40.772 Latency(us) 00:18:40.772 [2024-12-06T12:26:27.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.772 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:40.772 NVMe0n1 : 8.14 2341.73 9.15 15.73 0.00 54382.51 7060.01 7046430.72 00:18:40.772 [2024-12-06T12:26:27.430Z] =================================================================================================================== 00:18:40.772 [2024-12-06T12:26:27.430Z] Total : 2341.73 9.15 15.73 0.00 54382.51 7060.01 7046430.72 00:18:40.772 { 00:18:40.772 "results": [ 00:18:40.772 { 00:18:40.772 "job": "NVMe0n1", 00:18:40.772 "core_mask": "0x4", 00:18:40.772 "workload": "randread", 00:18:40.772 "status": "finished", 00:18:40.772 "queue_depth": 128, 00:18:40.772 "io_size": 4096, 00:18:40.772 "runtime": 8.135003, 00:18:40.772 "iops": 2341.732387806126, 00:18:40.772 "mibps": 9.14739213986768, 00:18:40.772 "io_failed": 128, 00:18:40.772 "io_timeout": 0, 00:18:40.772 "avg_latency_us": 54382.51266545947, 00:18:40.772 "min_latency_us": 7060.014545454545, 00:18:40.772 "max_latency_us": 7046430.72 00:18:40.772 } 00:18:40.772 ], 00:18:40.772 "core_count": 1 00:18:40.772 } 00:18:40.772 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:40.772 Attaching 5 probes... 00:18:40.772 1334.545587: reset bdev controller NVMe0 00:18:40.772 1334.619988: reconnect bdev controller NVMe0 00:18:40.772 3334.863852: reconnect delay bdev controller NVMe0 00:18:40.772 3334.896698: reconnect bdev controller NVMe0 00:18:40.772 5336.496618: reconnect delay bdev controller NVMe0 00:18:40.772 5336.528317: reconnect bdev controller NVMe0 00:18:40.772 7336.787339: reconnect delay bdev controller NVMe0 00:18:40.772 7336.801020: reconnect bdev controller NVMe0 00:18:40.772 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:40.772 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:40.772 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81707 00:18:40.772 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:40.772 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81703 00:18:40.772 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81703 ']' 00:18:40.772 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81703 00:18:40.772 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:40.772 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.031 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81703 00:18:41.031 killing process with pid 81703 00:18:41.031 Received shutdown signal, test time was about 8.199405 seconds 00:18:41.031 00:18:41.031 Latency(us) 00:18:41.031 [2024-12-06T12:26:27.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.031 [2024-12-06T12:26:27.689Z] =================================================================================================================== 00:18:41.031 [2024-12-06T12:26:27.689Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.031 12:26:27 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:41.031 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:41.031 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81703' 00:18:41.031 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81703 00:18:41.031 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81703 00:18:41.031 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.290 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:41.290 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:18:41.290 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:41.290 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:18:41.290 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:41.290 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:18:41.290 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:41.290 12:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:41.549 rmmod nvme_tcp 00:18:41.549 rmmod nvme_fabrics 00:18:41.549 rmmod nvme_keyring 00:18:41.549 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:41.549 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:18:41.549 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:18:41.549 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81280 ']' 00:18:41.549 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81280 00:18:41.549 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81280 ']' 00:18:41.549 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81280 00:18:41.549 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:41.549 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.549 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81280 00:18:41.549 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.550 killing process with pid 81280 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81280' 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81280 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81280 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:41.550 12:26:28 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:41.550 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:18:41.808 00:18:41.808 real 0m45.136s 00:18:41.808 user 2m11.657s 00:18:41.808 sys 0m5.250s 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.808 12:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:41.808 ************************************ 00:18:41.808 END TEST nvmf_timeout 00:18:41.808 ************************************ 00:18:42.066 12:26:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:18:42.066 12:26:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:42.066 00:18:42.066 real 4m55.522s 00:18:42.066 user 12m53.120s 00:18:42.066 sys 1m4.911s 00:18:42.066 12:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.066 12:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:18:42.066 ************************************ 00:18:42.066 END TEST nvmf_host 00:18:42.066 ************************************ 00:18:42.066 12:26:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:18:42.066 12:26:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:18:42.066 ************************************ 00:18:42.066 END TEST nvmf_tcp 00:18:42.066 ************************************ 00:18:42.066 00:18:42.066 real 12m10.931s 00:18:42.066 user 29m21.350s 00:18:42.066 sys 3m1.355s 00:18:42.066 12:26:28 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.066 12:26:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:42.066 12:26:28 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:18:42.066 12:26:28 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:42.066 12:26:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:42.066 12:26:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.066 12:26:28 -- common/autotest_common.sh@10 -- # set +x 00:18:42.066 ************************************ 00:18:42.066 START TEST nvmf_dif 00:18:42.066 ************************************ 00:18:42.067 12:26:28 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:42.067 * Looking for test storage... 00:18:42.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:42.067 12:26:28 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:42.067 12:26:28 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:18:42.067 12:26:28 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:42.326 12:26:28 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:18:42.326 12:26:28 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.326 12:26:28 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:42.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.326 --rc genhtml_branch_coverage=1 00:18:42.326 --rc genhtml_function_coverage=1 00:18:42.326 --rc genhtml_legend=1 00:18:42.326 --rc geninfo_all_blocks=1 00:18:42.326 --rc geninfo_unexecuted_blocks=1 00:18:42.326 00:18:42.326 ' 00:18:42.326 12:26:28 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:42.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.326 --rc genhtml_branch_coverage=1 00:18:42.326 --rc genhtml_function_coverage=1 00:18:42.326 --rc genhtml_legend=1 00:18:42.326 --rc geninfo_all_blocks=1 00:18:42.326 --rc geninfo_unexecuted_blocks=1 00:18:42.326 00:18:42.326 ' 00:18:42.326 12:26:28 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:42.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.326 --rc genhtml_branch_coverage=1 00:18:42.326 --rc genhtml_function_coverage=1 00:18:42.326 --rc genhtml_legend=1 00:18:42.326 --rc geninfo_all_blocks=1 00:18:42.326 --rc geninfo_unexecuted_blocks=1 00:18:42.326 00:18:42.326 ' 00:18:42.326 12:26:28 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:42.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.326 --rc genhtml_branch_coverage=1 00:18:42.326 --rc genhtml_function_coverage=1 00:18:42.326 --rc genhtml_legend=1 00:18:42.326 --rc geninfo_all_blocks=1 00:18:42.326 --rc geninfo_unexecuted_blocks=1 00:18:42.326 00:18:42.326 ' 00:18:42.326 12:26:28 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.326 12:26:28 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.326 12:26:28 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:42.326 12:26:28 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.327 12:26:28 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.327 12:26:28 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.327 12:26:28 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.327 12:26:28 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.327 12:26:28 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.327 12:26:28 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.327 12:26:28 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:18:42.327 12:26:28 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.327 12:26:28 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:42.327 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:42.327 12:26:28 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:18:42.327 12:26:28 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:42.327 12:26:28 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:42.327 12:26:28 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:18:42.327 12:26:28 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.327 12:26:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:42.327 12:26:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:42.327 Cannot find device 
"nvmf_init_br" 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@162 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:42.327 Cannot find device "nvmf_init_br2" 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@163 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:42.327 Cannot find device "nvmf_tgt_br" 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@164 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.327 Cannot find device "nvmf_tgt_br2" 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@165 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:42.327 Cannot find device "nvmf_init_br" 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@166 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:42.327 Cannot find device "nvmf_init_br2" 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@167 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:42.327 Cannot find device "nvmf_tgt_br" 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@168 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:42.327 Cannot find device "nvmf_tgt_br2" 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@169 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:42.327 Cannot find device "nvmf_br" 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@170 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:42.327 Cannot find device "nvmf_init_if" 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@171 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:42.327 Cannot find device "nvmf_init_if2" 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@172 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:42.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@173 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:42.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@174 -- # true 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:42.327 12:26:28 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:42.600 12:26:28 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:42.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:42.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:18:42.601 00:18:42.601 --- 10.0.0.3 ping statistics --- 00:18:42.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.601 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:42.601 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:42.601 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:18:42.601 00:18:42.601 --- 10.0.0.4 ping statistics --- 00:18:42.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.601 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:42.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:42.601 00:18:42.601 --- 10.0.0.1 ping statistics --- 00:18:42.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.601 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:42.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:18:42.601 00:18:42.601 --- 10.0.0.2 ping statistics --- 00:18:42.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.601 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:18:42.601 12:26:29 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:42.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:43.118 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:43.118 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:43.118 12:26:29 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.118 12:26:29 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:43.118 12:26:29 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:43.118 12:26:29 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.118 12:26:29 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:43.118 12:26:29 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:43.118 12:26:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:43.118 12:26:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:18:43.118 12:26:29 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:43.118 12:26:29 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.118 12:26:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:43.118 12:26:29 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82240 00:18:43.118 12:26:29 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:43.118 12:26:29 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82240 00:18:43.118 12:26:29 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82240 ']' 00:18:43.118 12:26:29 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.118 12:26:29 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.118 12:26:29 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:43.118 12:26:29 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.118 12:26:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:43.118 [2024-12-06 12:26:29.652147] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:18:43.118 [2024-12-06 12:26:29.652256] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.378 [2024-12-06 12:26:29.805240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.378 [2024-12-06 12:26:29.843044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.378 [2024-12-06 12:26:29.843112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.378 [2024-12-06 12:26:29.843127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.378 [2024-12-06 12:26:29.843137] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.378 [2024-12-06 12:26:29.843146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.378 [2024-12-06 12:26:29.843541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.378 [2024-12-06 12:26:29.879125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:43.378 12:26:29 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.378 12:26:29 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:18:43.378 12:26:29 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.378 12:26:29 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.378 12:26:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:43.378 12:26:29 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.378 12:26:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:18:43.378 12:26:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:43.378 12:26:29 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.378 12:26:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:43.378 [2024-12-06 12:26:29.979203] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.378 12:26:29 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.378 12:26:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:43.378 12:26:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.378 12:26:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.378 12:26:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:43.378 ************************************ 00:18:43.378 START TEST fio_dif_1_default 00:18:43.378 ************************************ 00:18:43.378 12:26:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:18:43.378 12:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:18:43.378 12:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:18:43.378 12:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:18:43.378 12:26:29 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:18:43.378 12:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:18:43.378 12:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:43.378 12:26:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.378 12:26:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:43.378 bdev_null0 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:43.378 [2024-12-06 12:26:30.023371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:43.378 { 00:18:43.378 "params": { 00:18:43.378 "name": "Nvme$subsystem", 00:18:43.378 "trtype": "$TEST_TRANSPORT", 00:18:43.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:43.378 "adrfam": "ipv4", 00:18:43.378 "trsvcid": "$NVMF_PORT", 00:18:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:43.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:43.378 "hdgst": ${hdgst:-false}, 00:18:43.378 "ddgst": ${ddgst:-false} 00:18:43.378 }, 00:18:43.378 "method": "bdev_nvme_attach_controller" 00:18:43.378 } 00:18:43.378 EOF 00:18:43.378 )") 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:18:43.378 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:18:43.636 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:18:43.636 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:18:43.636 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.636 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:18:43.636 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:43.636 12:26:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:18:43.636 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:18:43.636 12:26:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:18:43.636 12:26:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:18:43.636 12:26:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:43.636 "params": { 00:18:43.636 "name": "Nvme0", 00:18:43.636 "trtype": "tcp", 00:18:43.636 "traddr": "10.0.0.3", 00:18:43.636 "adrfam": "ipv4", 00:18:43.636 "trsvcid": "4420", 00:18:43.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:43.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:43.637 "hdgst": false, 00:18:43.637 "ddgst": false 00:18:43.637 }, 00:18:43.637 "method": "bdev_nvme_attach_controller" 00:18:43.637 }' 00:18:43.637 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:43.637 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:43.637 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.637 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.637 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:43.637 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:43.637 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:43.637 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:43.637 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 
00:18:43.637 12:26:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:43.637 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:43.637 fio-3.35 00:18:43.637 Starting 1 thread 00:18:55.867 00:18:55.867 filename0: (groupid=0, jobs=1): err= 0: pid=82299: Fri Dec 6 12:26:40 2024 00:18:55.867 read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(396MiB/10001msec) 00:18:55.867 slat (nsec): min=5909, max=49346, avg=7475.83, stdev=3026.08 00:18:55.867 clat (usec): min=312, max=3848, avg=372.32, stdev=49.08 00:18:55.867 lat (usec): min=318, max=3874, avg=379.80, stdev=49.68 00:18:55.867 clat percentiles (usec): 00:18:55.867 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 338], 00:18:55.867 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:18:55.867 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 445], 00:18:55.867 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 635], 99.95th=[ 685], 00:18:55.867 | 99.99th=[ 979] 00:18:55.867 bw ( KiB/s): min=39072, max=41504, per=100.00%, avg=40570.95, stdev=717.79, samples=19 00:18:55.867 iops : min= 9768, max=10376, avg=10142.74, stdev=179.45, samples=19 00:18:55.867 lat (usec) : 500=98.75%, 750=1.22%, 1000=0.03% 00:18:55.867 lat (msec) : 4=0.01% 00:18:55.867 cpu : usr=84.99%, sys=13.13%, ctx=21, majf=0, minf=9 00:18:55.867 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:55.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.867 issued rwts: total=101408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.867 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:55.867 00:18:55.867 Run status group 0 (all jobs): 00:18:55.867 READ: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=396MiB (415MB), run=10001-10001msec 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.867 00:18:55.867 real 0m10.928s 00:18:55.867 user 0m9.109s 00:18:55.867 sys 0m1.549s 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.867 
************************************ 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:55.867 END TEST fio_dif_1_default 00:18:55.867 ************************************ 00:18:55.867 12:26:40 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:55.867 12:26:40 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:55.867 12:26:40 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.867 12:26:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:55.867 ************************************ 00:18:55.867 START TEST fio_dif_1_multi_subsystems 00:18:55.867 ************************************ 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.867 bdev_null0 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.867 12:26:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.867 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.867 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:55.867 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.867 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.867 [2024-12-06 12:26:41.005377] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.868 bdev_null1 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:55.868 
12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:55.868 { 00:18:55.868 "params": { 00:18:55.868 "name": "Nvme$subsystem", 00:18:55.868 "trtype": "$TEST_TRANSPORT", 00:18:55.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:55.868 "adrfam": "ipv4", 00:18:55.868 "trsvcid": "$NVMF_PORT", 00:18:55.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:55.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:55.868 "hdgst": ${hdgst:-false}, 00:18:55.868 "ddgst": ${ddgst:-false} 00:18:55.868 }, 00:18:55.868 "method": "bdev_nvme_attach_controller" 00:18:55.868 } 00:18:55.868 EOF 00:18:55.868 )") 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:55.868 { 00:18:55.868 "params": { 00:18:55.868 "name": "Nvme$subsystem", 00:18:55.868 "trtype": "$TEST_TRANSPORT", 00:18:55.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:55.868 "adrfam": "ipv4", 00:18:55.868 "trsvcid": "$NVMF_PORT", 00:18:55.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:55.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:55.868 "hdgst": ${hdgst:-false}, 00:18:55.868 "ddgst": ${ddgst:-false} 00:18:55.868 }, 00:18:55.868 "method": "bdev_nvme_attach_controller" 00:18:55.868 } 00:18:55.868 EOF 00:18:55.868 )") 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:55.868 "params": { 00:18:55.868 "name": "Nvme0", 00:18:55.868 "trtype": "tcp", 00:18:55.868 "traddr": "10.0.0.3", 00:18:55.868 "adrfam": "ipv4", 00:18:55.868 "trsvcid": "4420", 00:18:55.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:55.868 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:55.868 "hdgst": false, 00:18:55.868 "ddgst": false 00:18:55.868 }, 00:18:55.868 "method": "bdev_nvme_attach_controller" 00:18:55.868 },{ 00:18:55.868 "params": { 00:18:55.868 "name": "Nvme1", 00:18:55.868 "trtype": "tcp", 00:18:55.868 "traddr": "10.0.0.3", 00:18:55.868 "adrfam": "ipv4", 00:18:55.868 "trsvcid": "4420", 00:18:55.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.868 "hdgst": false, 00:18:55.868 "ddgst": false 00:18:55.868 }, 00:18:55.868 "method": "bdev_nvme_attach_controller" 00:18:55.868 }' 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:55.868 12:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:55.868 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:55.868 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:55.868 fio-3.35 00:18:55.868 Starting 2 threads 00:19:05.852 00:19:05.852 filename0: (groupid=0, jobs=1): err= 0: pid=82459: Fri Dec 6 12:26:51 2024 00:19:05.852 read: IOPS=5314, BW=20.8MiB/s (21.8MB/s)(208MiB/10001msec) 00:19:05.852 slat (nsec): min=6018, max=75116, avg=12578.74, stdev=4446.31 00:19:05.852 clat (usec): min=322, max=7433, avg=717.98, stdev=100.74 00:19:05.852 lat (usec): min=328, max=7460, avg=730.56, stdev=100.88 00:19:05.852 clat percentiles (usec): 00:19:05.852 | 1.00th=[ 635], 5.00th=[ 652], 10.00th=[ 660], 20.00th=[ 676], 00:19:05.852 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 717], 00:19:05.852 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 783], 95.00th=[ 816], 00:19:05.852 | 99.00th=[ 906], 99.50th=[ 988], 99.90th=[ 1582], 99.95th=[ 1614], 00:19:05.852 | 99.99th=[ 4621] 00:19:05.852 bw ( KiB/s): min=18981, max=21856, per=50.03%, avg=21252.74, stdev=605.28, samples=19 00:19:05.852 iops : min= 4745, max= 
5464, avg=5313.47, stdev=151.17, samples=19 00:19:05.852 lat (usec) : 500=0.02%, 750=80.51%, 1000=18.99% 00:19:05.852 lat (msec) : 2=0.47%, 10=0.02% 00:19:05.852 cpu : usr=89.94%, sys=8.64%, ctx=11, majf=0, minf=0 00:19:05.852 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.852 issued rwts: total=53148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.852 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:05.852 filename1: (groupid=0, jobs=1): err= 0: pid=82460: Fri Dec 6 12:26:51 2024 00:19:05.852 read: IOPS=5306, BW=20.7MiB/s (21.7MB/s)(207MiB/10001msec) 00:19:05.852 slat (nsec): min=5989, max=71449, avg=12324.32, stdev=4313.76 00:19:05.852 clat (usec): min=444, max=8746, avg=720.73, stdev=122.58 00:19:05.852 lat (usec): min=467, max=8762, avg=733.05, stdev=123.01 00:19:05.852 clat percentiles (usec): 00:19:05.852 | 1.00th=[ 611], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 676], 00:19:05.852 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[ 709], 60.00th=[ 717], 00:19:05.852 | 70.00th=[ 734], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 824], 00:19:05.852 | 99.00th=[ 906], 99.50th=[ 971], 99.90th=[ 2343], 99.95th=[ 2343], 00:19:05.852 | 99.99th=[ 2409] 00:19:05.852 bw ( KiB/s): min=18112, max=21856, per=49.96%, avg=21222.16, stdev=790.54, samples=19 00:19:05.852 iops : min= 4528, max= 5464, avg=5305.53, stdev=197.64, samples=19 00:19:05.852 lat (usec) : 500=0.03%, 750=78.53%, 1000=20.97% 00:19:05.852 lat (msec) : 2=0.27%, 4=0.20%, 10=0.01% 00:19:05.852 cpu : usr=89.91%, sys=8.79%, ctx=18, majf=0, minf=0 00:19:05.852 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.852 issued rwts: total=53068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.852 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:05.852 00:19:05.852 Run status group 0 (all jobs): 00:19:05.852 READ: bw=41.5MiB/s (43.5MB/s), 20.7MiB/s-20.8MiB/s (21.7MB/s-21.8MB/s), io=415MiB (435MB), run=10001-10001msec 00:19:05.852 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:05.852 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:05.852 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:05.852 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:05.852 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:05.852 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:05.852 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.852 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:05.852 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.853 12:26:51 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.853 12:26:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:05.853 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.853 00:19:05.853 real 0m11.029s 00:19:05.853 user 0m18.678s 00:19:05.853 sys 0m1.987s 00:19:05.853 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.853 ************************************ 00:19:05.853 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:05.853 END TEST fio_dif_1_multi_subsystems 00:19:05.853 ************************************ 00:19:05.853 12:26:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:05.853 12:26:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.853 12:26:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.853 12:26:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:05.853 ************************************ 00:19:05.853 START TEST fio_dif_rand_params 00:19:05.853 ************************************ 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:05.853 bdev_null0 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:05.853 [2024-12-06 12:26:52.088268] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:05.853 { 00:19:05.853 "params": { 00:19:05.853 "name": "Nvme$subsystem", 00:19:05.853 "trtype": "$TEST_TRANSPORT", 00:19:05.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.853 "adrfam": "ipv4", 00:19:05.853 "trsvcid": "$NVMF_PORT", 00:19:05.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.853 "hdgst": ${hdgst:-false}, 00:19:05.853 
"ddgst": ${ddgst:-false} 00:19:05.853 }, 00:19:05.853 "method": "bdev_nvme_attach_controller" 00:19:05.853 } 00:19:05.853 EOF 00:19:05.853 )") 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:05.853 "params": { 00:19:05.853 "name": "Nvme0", 00:19:05.853 "trtype": "tcp", 00:19:05.853 "traddr": "10.0.0.3", 00:19:05.853 "adrfam": "ipv4", 00:19:05.853 "trsvcid": "4420", 00:19:05.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:05.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:05.853 "hdgst": false, 00:19:05.853 "ddgst": false 00:19:05.853 }, 00:19:05.853 "method": "bdev_nvme_attach_controller" 00:19:05.853 }' 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:05.853 12:26:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.853 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:05.853 ... 
00:19:05.853 fio-3.35 00:19:05.853 Starting 3 threads 00:19:11.152 00:19:11.152 filename0: (groupid=0, jobs=1): err= 0: pid=82611: Fri Dec 6 12:26:57 2024 00:19:11.152 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(172MiB/5004msec) 00:19:11.152 slat (nsec): min=7087, max=45574, avg=13942.18, stdev=4317.37 00:19:11.152 clat (usec): min=8921, max=13087, avg=10871.33, stdev=372.27 00:19:11.152 lat (usec): min=8934, max=13100, avg=10885.28, stdev=372.97 00:19:11.152 clat percentiles (usec): 00:19:11.152 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:19:11.152 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10814], 60.00th=[10814], 00:19:11.152 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11338], 95.00th=[11600], 00:19:11.152 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13042], 99.95th=[13042], 00:19:11.152 | 99.99th=[13042] 00:19:11.152 bw ( KiB/s): min=34560, max=35328, per=33.28%, avg=35157.33, stdev=338.66, samples=9 00:19:11.152 iops : min= 270, max= 276, avg=274.67, stdev= 2.65, samples=9 00:19:11.152 lat (msec) : 10=0.22%, 20=99.78% 00:19:11.152 cpu : usr=91.83%, sys=7.57%, ctx=11, majf=0, minf=0 00:19:11.152 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.152 issued rwts: total=1377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.152 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:11.152 filename0: (groupid=0, jobs=1): err= 0: pid=82612: Fri Dec 6 12:26:57 2024 00:19:11.152 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(172MiB/5005msec) 00:19:11.152 slat (nsec): min=6929, max=45781, avg=14161.59, stdev=4328.89 00:19:11.152 clat (usec): min=8914, max=13079, avg=10870.76, stdev=372.16 00:19:11.152 lat (usec): min=8926, max=13097, avg=10884.92, stdev=372.98 00:19:11.152 clat percentiles (usec): 00:19:11.152 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:19:11.152 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10814], 60.00th=[10814], 00:19:11.152 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11338], 95.00th=[11600], 00:19:11.152 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13042], 99.95th=[13042], 00:19:11.152 | 99.99th=[13042] 00:19:11.152 bw ( KiB/s): min=34560, max=35328, per=33.28%, avg=35157.33, stdev=338.66, samples=9 00:19:11.152 iops : min= 270, max= 276, avg=274.67, stdev= 2.65, samples=9 00:19:11.152 lat (msec) : 10=0.22%, 20=99.78% 00:19:11.152 cpu : usr=91.17%, sys=8.19%, ctx=15, majf=0, minf=0 00:19:11.152 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.152 issued rwts: total=1377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.152 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:11.152 filename0: (groupid=0, jobs=1): err= 0: pid=82613: Fri Dec 6 12:26:57 2024 00:19:11.152 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(173MiB/5009msec) 00:19:11.152 slat (nsec): min=6602, max=36296, avg=9432.17, stdev=4017.47 00:19:11.152 clat (usec): min=4251, max=12841, avg=10865.83, stdev=482.55 00:19:11.152 lat (usec): min=4260, max=12853, avg=10875.26, stdev=482.59 00:19:11.152 clat percentiles (usec): 00:19:11.152 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:19:11.152 | 30.00th=[10683], 40.00th=[10683], 
50.00th=[10814], 60.00th=[10814], 00:19:11.152 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:19:11.152 | 99.00th=[12256], 99.50th=[12649], 99.90th=[12780], 99.95th=[12780], 00:19:11.152 | 99.99th=[12780] 00:19:11.152 bw ( KiB/s): min=34491, max=36096, per=33.36%, avg=35244.30, stdev=448.47, samples=10 00:19:11.152 iops : min= 269, max= 282, avg=275.30, stdev= 3.59, samples=10 00:19:11.152 lat (msec) : 10=0.22%, 20=99.78% 00:19:11.152 cpu : usr=90.95%, sys=8.49%, ctx=12, majf=0, minf=0 00:19:11.152 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.152 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.152 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:11.152 00:19:11.152 Run status group 0 (all jobs): 00:19:11.152 READ: bw=103MiB/s (108MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=517MiB (542MB), run=5004-5009msec 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:11.411 
12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.411 bdev_null0 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.411 [2024-12-06 12:26:57.969163] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.411 bdev_null1 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.411 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.411 12:26:57 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:11.412 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.412 12:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.412 bdev_null2 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:11.412 { 00:19:11.412 "params": { 00:19:11.412 "name": "Nvme$subsystem", 00:19:11.412 "trtype": "$TEST_TRANSPORT", 00:19:11.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:11.412 "adrfam": "ipv4", 00:19:11.412 "trsvcid": "$NVMF_PORT", 00:19:11.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:11.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:11.412 "hdgst": ${hdgst:-false}, 00:19:11.412 "ddgst": ${ddgst:-false} 00:19:11.412 }, 00:19:11.412 "method": "bdev_nvme_attach_controller" 00:19:11.412 } 00:19:11.412 EOF 00:19:11.412 )") 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:11.412 { 00:19:11.412 "params": { 00:19:11.412 "name": "Nvme$subsystem", 00:19:11.412 "trtype": "$TEST_TRANSPORT", 00:19:11.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:11.412 "adrfam": "ipv4", 00:19:11.412 "trsvcid": "$NVMF_PORT", 00:19:11.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:11.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:11.412 "hdgst": ${hdgst:-false}, 00:19:11.412 "ddgst": ${ddgst:-false} 00:19:11.412 }, 00:19:11.412 "method": "bdev_nvme_attach_controller" 00:19:11.412 } 00:19:11.412 EOF 00:19:11.412 )") 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 
00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:11.412 { 00:19:11.412 "params": { 00:19:11.412 "name": "Nvme$subsystem", 00:19:11.412 "trtype": "$TEST_TRANSPORT", 00:19:11.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:11.412 "adrfam": "ipv4", 00:19:11.412 "trsvcid": "$NVMF_PORT", 00:19:11.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:11.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:11.412 "hdgst": ${hdgst:-false}, 00:19:11.412 "ddgst": ${ddgst:-false} 00:19:11.412 }, 00:19:11.412 "method": "bdev_nvme_attach_controller" 00:19:11.412 } 00:19:11.412 EOF 00:19:11.412 )") 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:11.412 12:26:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:11.412 "params": { 00:19:11.412 "name": "Nvme0", 00:19:11.412 "trtype": "tcp", 00:19:11.412 "traddr": "10.0.0.3", 00:19:11.412 "adrfam": "ipv4", 00:19:11.412 "trsvcid": "4420", 00:19:11.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:11.412 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:11.412 "hdgst": false, 00:19:11.412 "ddgst": false 00:19:11.412 }, 00:19:11.412 "method": "bdev_nvme_attach_controller" 00:19:11.412 },{ 00:19:11.412 "params": { 00:19:11.412 "name": "Nvme1", 00:19:11.412 "trtype": "tcp", 00:19:11.412 "traddr": "10.0.0.3", 00:19:11.412 "adrfam": "ipv4", 00:19:11.412 "trsvcid": "4420", 00:19:11.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:11.412 "hdgst": false, 00:19:11.412 "ddgst": false 00:19:11.412 }, 00:19:11.412 "method": "bdev_nvme_attach_controller" 00:19:11.412 },{ 00:19:11.412 "params": { 00:19:11.412 "name": "Nvme2", 00:19:11.412 "trtype": "tcp", 00:19:11.412 "traddr": "10.0.0.3", 00:19:11.412 "adrfam": "ipv4", 00:19:11.412 "trsvcid": "4420", 00:19:11.412 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:11.412 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:11.412 "hdgst": false, 00:19:11.412 "ddgst": false 00:19:11.412 }, 00:19:11.412 "method": "bdev_nvme_attach_controller" 00:19:11.412 }' 00:19:11.671 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:11.671 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:11.671 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:11.671 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:11.671 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:11.671 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:11.671 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:11.671 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:11.671 12:26:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:11.671 12:26:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:11.671 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:11.671 ... 00:19:11.671 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:11.671 ... 00:19:11.671 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:11.671 ... 00:19:11.671 fio-3.35 00:19:11.671 Starting 24 threads 00:19:23.914 00:19:23.914 filename0: (groupid=0, jobs=1): err= 0: pid=82710: Fri Dec 6 12:27:08 2024 00:19:23.914 read: IOPS=219, BW=877KiB/s (898kB/s)(8808KiB/10047msec) 00:19:23.914 slat (usec): min=5, max=8022, avg=19.30, stdev=190.84 00:19:23.914 clat (msec): min=17, max=155, avg=72.88, stdev=22.28 00:19:23.914 lat (msec): min=17, max=155, avg=72.90, stdev=22.29 00:19:23.914 clat percentiles (msec): 00:19:23.914 | 1.00th=[ 24], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 53], 00:19:23.914 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 78], 00:19:23.914 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 117], 00:19:23.914 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 146], 99.95th=[ 155], 00:19:23.914 | 99.99th=[ 157] 00:19:23.914 bw ( KiB/s): min= 568, max= 1440, per=4.13%, avg=874.10, stdev=188.58, samples=20 00:19:23.914 iops : min= 142, max= 360, avg=218.50, stdev=47.13, samples=20 00:19:23.914 lat (msec) : 20=0.73%, 50=16.67%, 100=69.39%, 250=13.22% 00:19:23.914 cpu : usr=37.73%, sys=2.20%, ctx=1450, majf=0, minf=9 00:19:23.914 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=81.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:23.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.914 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.914 issued rwts: total=2202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.914 filename0: (groupid=0, jobs=1): err= 0: pid=82711: Fri Dec 6 12:27:08 2024 00:19:23.914 read: IOPS=229, BW=918KiB/s (940kB/s)(9180KiB/10004msec) 00:19:23.914 slat (usec): min=4, max=8032, avg=26.63, stdev=279.36 00:19:23.914 clat (msec): min=3, max=134, avg=69.63, stdev=21.61 00:19:23.914 lat (msec): min=3, max=134, avg=69.65, stdev=21.62 00:19:23.914 clat percentiles (msec): 00:19:23.914 | 1.00th=[ 7], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 50], 00:19:23.914 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 74], 00:19:23.914 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 112], 00:19:23.914 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 136], 99.95th=[ 136], 00:19:23.914 | 99.99th=[ 136] 00:19:23.914 bw ( KiB/s): min= 664, max= 1024, per=4.24%, avg=897.00, stdev=112.75, samples=19 00:19:23.914 iops : min= 166, max= 256, avg=224.21, stdev=28.21, samples=19 00:19:23.914 lat (msec) : 4=0.13%, 10=1.39%, 20=0.44%, 50=20.39%, 100=68.10% 00:19:23.914 lat (msec) : 250=9.54% 00:19:23.914 cpu : usr=38.66%, sys=2.17%, ctx=1152, majf=0, minf=9 00:19:23.914 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=82.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:23.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.914 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:23.914 issued rwts: total=2295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.914 filename0: (groupid=0, jobs=1): err= 0: pid=82712: Fri Dec 6 12:27:08 2024 00:19:23.914 read: IOPS=226, BW=905KiB/s (926kB/s)(9056KiB/10011msec) 00:19:23.914 slat (usec): min=3, max=8023, avg=19.72, stdev=188.29 00:19:23.914 clat (msec): min=10, max=130, avg=70.64, stdev=20.72 00:19:23.914 lat (msec): min=10, max=130, avg=70.66, stdev=20.72 00:19:23.914 clat percentiles (msec): 00:19:23.914 | 1.00th=[ 30], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 50], 00:19:23.914 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:19:23.914 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 102], 95.00th=[ 109], 00:19:23.914 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 131], 99.95th=[ 131], 00:19:23.914 | 99.99th=[ 131] 00:19:23.914 bw ( KiB/s): min= 664, max= 1136, per=4.23%, avg=895.16, stdev=128.94, samples=19 00:19:23.914 iops : min= 166, max= 284, avg=223.79, stdev=32.23, samples=19 00:19:23.914 lat (msec) : 20=0.57%, 50=22.08%, 100=67.31%, 250=10.03% 00:19:23.914 cpu : usr=34.80%, sys=1.94%, ctx=968, majf=0, minf=9 00:19:23.914 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.5%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:23.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.914 complete : 0=0.0%, 4=87.9%, 8=11.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.914 issued rwts: total=2264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.914 filename0: (groupid=0, jobs=1): err= 0: pid=82713: Fri Dec 6 12:27:08 2024 00:19:23.914 read: IOPS=229, BW=918KiB/s (940kB/s)(9196KiB/10019msec) 00:19:23.914 slat (usec): min=4, max=8027, avg=21.10, stdev=188.11 00:19:23.914 clat (msec): min=21, max=127, avg=69.61, stdev=20.53 00:19:23.914 lat (msec): min=21, max=127, avg=69.63, stdev=20.53 00:19:23.914 clat percentiles (msec): 00:19:23.914 | 1.00th=[ 31], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 49], 00:19:23.914 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 73], 00:19:23.914 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 100], 95.00th=[ 112], 00:19:23.914 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 128], 99.95th=[ 129], 00:19:23.914 | 99.99th=[ 129] 00:19:23.914 bw ( KiB/s): min= 664, max= 1144, per=4.32%, avg=915.79, stdev=130.19, samples=19 00:19:23.914 iops : min= 166, max= 286, avg=228.95, stdev=32.55, samples=19 00:19:23.914 lat (msec) : 50=23.66%, 100=66.72%, 250=9.61% 00:19:23.914 cpu : usr=40.34%, sys=2.18%, ctx=1348, majf=0, minf=9 00:19:23.914 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:23.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.914 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.914 issued rwts: total=2299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.914 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.914 filename0: (groupid=0, jobs=1): err= 0: pid=82714: Fri Dec 6 12:27:08 2024 00:19:23.914 read: IOPS=208, BW=832KiB/s (852kB/s)(8360KiB/10043msec) 00:19:23.914 slat (usec): min=7, max=8025, avg=17.50, stdev=175.40 00:19:23.914 clat (msec): min=9, max=155, avg=76.63, stdev=23.82 00:19:23.914 lat (msec): min=9, max=156, avg=76.65, stdev=23.82 00:19:23.914 clat percentiles (msec): 00:19:23.914 | 1.00th=[ 12], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 59], 00:19:23.914 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 81], 
00:19:23.914 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:19:23.914 | 99.00th=[ 128], 99.50th=[ 134], 99.90th=[ 134], 99.95th=[ 134], 00:19:23.914 | 99.99th=[ 157] 00:19:23.915 bw ( KiB/s): min= 592, max= 1552, per=3.93%, avg=831.70, stdev=204.33, samples=20 00:19:23.915 iops : min= 148, max= 388, avg=207.85, stdev=51.08, samples=20 00:19:23.915 lat (msec) : 10=0.48%, 20=2.58%, 50=10.33%, 100=70.19%, 250=16.41% 00:19:23.915 cpu : usr=37.31%, sys=2.07%, ctx=1048, majf=0, minf=9 00:19:23.915 IO depths : 1=0.1%, 2=2.1%, 4=8.3%, 8=74.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:23.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.915 complete : 0=0.0%, 4=89.8%, 8=8.3%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.915 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.915 filename0: (groupid=0, jobs=1): err= 0: pid=82715: Fri Dec 6 12:27:08 2024 00:19:23.915 read: IOPS=221, BW=886KiB/s (907kB/s)(8868KiB/10008msec) 00:19:23.915 slat (usec): min=4, max=8025, avg=28.22, stdev=255.39 00:19:23.915 clat (msec): min=11, max=128, avg=72.04, stdev=20.32 00:19:23.915 lat (msec): min=11, max=128, avg=72.07, stdev=20.32 00:19:23.915 clat percentiles (msec): 00:19:23.915 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:19:23.915 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 77], 00:19:23.915 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 105], 95.00th=[ 112], 00:19:23.915 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 129], 99.95th=[ 129], 00:19:23.915 | 99.99th=[ 129] 00:19:23.915 bw ( KiB/s): min= 664, max= 1024, per=4.15%, avg=878.74, stdev=117.64, samples=19 00:19:23.915 iops : min= 166, max= 256, avg=219.68, stdev=29.41, samples=19 00:19:23.915 lat (msec) : 20=0.27%, 50=16.46%, 100=71.67%, 250=11.59% 00:19:23.915 cpu : usr=42.98%, sys=2.36%, ctx=1471, majf=0, minf=9 00:19:23.915 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=79.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:23.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.915 complete : 0=0.0%, 4=88.1%, 8=11.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.915 issued rwts: total=2217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.915 filename0: (groupid=0, jobs=1): err= 0: pid=82716: Fri Dec 6 12:27:08 2024 00:19:23.915 read: IOPS=216, BW=868KiB/s (889kB/s)(8712KiB/10040msec) 00:19:23.915 slat (usec): min=4, max=8026, avg=22.59, stdev=242.76 00:19:23.915 clat (msec): min=9, max=146, avg=73.53, stdev=22.53 00:19:23.915 lat (msec): min=9, max=146, avg=73.55, stdev=22.53 00:19:23.915 clat percentiles (msec): 00:19:23.915 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:19:23.915 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:19:23.915 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 118], 00:19:23.915 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 136], 00:19:23.915 | 99.99th=[ 146] 00:19:23.915 bw ( KiB/s): min= 640, max= 1536, per=4.09%, avg=866.90, stdev=190.82, samples=20 00:19:23.915 iops : min= 160, max= 384, avg=216.65, stdev=47.72, samples=20 00:19:23.915 lat (msec) : 10=0.51%, 20=1.70%, 50=14.69%, 100=71.76%, 250=11.34% 00:19:23.915 cpu : usr=31.57%, sys=1.78%, ctx=881, majf=0, minf=9 00:19:23.915 IO depths : 1=0.1%, 2=0.7%, 4=3.2%, 8=79.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:23.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:23.915 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.915 issued rwts: total=2178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.915 filename0: (groupid=0, jobs=1): err= 0: pid=82717: Fri Dec 6 12:27:08 2024 00:19:23.915 read: IOPS=227, BW=910KiB/s (931kB/s)(9116KiB/10022msec) 00:19:23.915 slat (usec): min=8, max=8037, avg=35.42, stdev=362.34 00:19:23.915 clat (msec): min=30, max=130, avg=70.14, stdev=20.16 00:19:23.915 lat (msec): min=30, max=130, avg=70.17, stdev=20.14 00:19:23.915 clat percentiles (msec): 00:19:23.915 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 50], 00:19:23.915 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:19:23.915 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 102], 95.00th=[ 111], 00:19:23.915 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 131], 99.95th=[ 131], 00:19:23.915 | 99.99th=[ 131] 00:19:23.915 bw ( KiB/s): min= 640, max= 1184, per=4.29%, avg=908.00, stdev=128.41, samples=20 00:19:23.915 iops : min= 160, max= 296, avg=227.00, stdev=32.10, samples=20 00:19:23.915 lat (msec) : 50=21.15%, 100=68.85%, 250=10.00% 00:19:23.915 cpu : usr=38.84%, sys=2.17%, ctx=1222, majf=0, minf=9 00:19:23.915 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:23.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.915 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.915 issued rwts: total=2279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.915 filename1: (groupid=0, jobs=1): err= 0: pid=82718: Fri Dec 6 12:27:08 2024 00:19:23.915 read: IOPS=219, BW=878KiB/s (899kB/s)(8820KiB/10042msec) 00:19:23.915 slat (usec): min=6, max=4028, avg=16.95, stdev=120.84 00:19:23.915 clat (msec): min=9, max=155, avg=72.68, stdev=23.05 00:19:23.915 lat (msec): min=9, max=155, avg=72.70, stdev=23.04 00:19:23.915 clat percentiles (msec): 00:19:23.915 | 1.00th=[ 13], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 52], 00:19:23.915 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:19:23.915 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 108], 95.00th=[ 117], 00:19:23.915 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 144], 00:19:23.915 | 99.99th=[ 157] 00:19:23.915 bw ( KiB/s): min= 584, max= 1600, per=4.14%, avg=877.25, stdev=211.53, samples=20 00:19:23.915 iops : min= 146, max= 400, avg=219.25, stdev=52.87, samples=20 00:19:23.915 lat (msec) : 10=0.63%, 20=1.45%, 50=15.65%, 100=69.89%, 250=12.38% 00:19:23.915 cpu : usr=33.18%, sys=1.73%, ctx=952, majf=0, minf=9 00:19:23.915 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.0%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:23.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.915 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.915 issued rwts: total=2205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.915 filename1: (groupid=0, jobs=1): err= 0: pid=82719: Fri Dec 6 12:27:08 2024 00:19:23.915 read: IOPS=209, BW=839KiB/s (859kB/s)(8420KiB/10040msec) 00:19:23.915 slat (usec): min=8, max=5025, avg=26.31, stdev=223.77 00:19:23.915 clat (msec): min=15, max=152, avg=76.14, stdev=22.93 00:19:23.915 lat (msec): min=15, max=152, avg=76.16, stdev=22.93 00:19:23.915 clat percentiles (msec): 00:19:23.915 | 1.00th=[ 18], 5.00th=[ 
36], 10.00th=[ 48], 20.00th=[ 56], 00:19:23.915 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 81], 00:19:23.915 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 116], 00:19:23.915 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 136], 99.95th=[ 144], 00:19:23.915 | 99.99th=[ 153] 00:19:23.915 bw ( KiB/s): min= 608, max= 1399, per=3.95%, avg=835.15, stdev=183.95, samples=20 00:19:23.915 iops : min= 152, max= 349, avg=208.75, stdev=45.87, samples=20 00:19:23.915 lat (msec) : 20=1.52%, 50=10.83%, 100=73.44%, 250=14.20% 00:19:23.915 cpu : usr=41.01%, sys=2.33%, ctx=1307, majf=0, minf=9 00:19:23.915 IO depths : 1=0.1%, 2=1.7%, 4=6.7%, 8=76.1%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:23.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.915 complete : 0=0.0%, 4=89.1%, 8=9.4%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.915 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.915 filename1: (groupid=0, jobs=1): err= 0: pid=82720: Fri Dec 6 12:27:08 2024 00:19:23.915 read: IOPS=211, BW=846KiB/s (866kB/s)(8480KiB/10023msec) 00:19:23.916 slat (usec): min=4, max=4074, avg=24.01, stdev=186.35 00:19:23.916 clat (msec): min=32, max=144, avg=75.44, stdev=20.71 00:19:23.916 lat (msec): min=32, max=144, avg=75.46, stdev=20.72 00:19:23.916 clat percentiles (msec): 00:19:23.916 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 54], 00:19:23.916 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 80], 00:19:23.916 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 108], 95.00th=[ 116], 00:19:23.916 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 138], 99.95th=[ 138], 00:19:23.916 | 99.99th=[ 144] 00:19:23.916 bw ( KiB/s): min= 656, max= 1138, per=3.99%, avg=844.10, stdev=141.87, samples=20 00:19:23.916 iops : min= 164, max= 284, avg=211.00, stdev=35.41, samples=20 00:19:23.916 lat (msec) : 50=13.35%, 100=74.86%, 250=11.79% 00:19:23.916 cpu : usr=43.95%, sys=2.54%, ctx=1519, majf=0, minf=9 00:19:23.916 IO depths : 1=0.1%, 2=2.2%, 4=8.4%, 8=74.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:19:23.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.916 complete : 0=0.0%, 4=89.3%, 8=8.9%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.916 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.916 filename1: (groupid=0, jobs=1): err= 0: pid=82721: Fri Dec 6 12:27:08 2024 00:19:23.916 read: IOPS=215, BW=860KiB/s (881kB/s)(8624KiB/10023msec) 00:19:23.916 slat (usec): min=3, max=8037, avg=33.15, stdev=385.51 00:19:23.916 clat (msec): min=24, max=144, avg=74.19, stdev=20.19 00:19:23.916 lat (msec): min=24, max=144, avg=74.22, stdev=20.20 00:19:23.916 clat percentiles (msec): 00:19:23.916 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 56], 00:19:23.916 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:19:23.916 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 110], 00:19:23.916 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 131], 00:19:23.916 | 99.99th=[ 144] 00:19:23.916 bw ( KiB/s): min= 616, max= 1154, per=4.05%, avg=858.90, stdev=142.34, samples=20 00:19:23.916 iops : min= 154, max= 288, avg=214.70, stdev=35.53, samples=20 00:19:23.916 lat (msec) : 50=16.84%, 100=71.66%, 250=11.50% 00:19:23.916 cpu : usr=31.34%, sys=1.93%, ctx=872, majf=0, minf=9 00:19:23.916 IO depths : 1=0.1%, 2=1.2%, 4=4.4%, 8=78.8%, 16=15.6%, 32=0.0%, >=64=0.0% 
00:19:23.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.916 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.916 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.916 filename1: (groupid=0, jobs=1): err= 0: pid=82722: Fri Dec 6 12:27:08 2024 00:19:23.916 read: IOPS=219, BW=880KiB/s (901kB/s)(8832KiB/10041msec) 00:19:23.916 slat (usec): min=8, max=11024, avg=22.59, stdev=289.74 00:19:23.916 clat (msec): min=9, max=156, avg=72.54, stdev=23.34 00:19:23.916 lat (msec): min=9, max=156, avg=72.56, stdev=23.33 00:19:23.916 clat percentiles (msec): 00:19:23.916 | 1.00th=[ 12], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 53], 00:19:23.916 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:19:23.916 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 108], 95.00th=[ 120], 00:19:23.916 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 144], 00:19:23.916 | 99.99th=[ 157] 00:19:23.916 bw ( KiB/s): min= 560, max= 1536, per=4.15%, avg=878.50, stdev=197.45, samples=20 00:19:23.916 iops : min= 140, max= 384, avg=219.55, stdev=49.34, samples=20 00:19:23.916 lat (msec) : 10=0.09%, 20=1.36%, 50=17.26%, 100=68.61%, 250=12.68% 00:19:23.916 cpu : usr=32.73%, sys=2.00%, ctx=973, majf=0, minf=9 00:19:23.916 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:23.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.916 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.916 issued rwts: total=2208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.916 filename1: (groupid=0, jobs=1): err= 0: pid=82723: Fri Dec 6 12:27:08 2024 00:19:23.916 read: IOPS=224, BW=897KiB/s (918kB/s)(9008KiB/10043msec) 00:19:23.916 slat (usec): min=6, max=4026, avg=22.09, stdev=154.78 00:19:23.916 clat (msec): min=15, max=157, avg=71.21, stdev=22.18 00:19:23.916 lat (msec): min=15, max=157, avg=71.23, stdev=22.19 00:19:23.916 clat percentiles (msec): 00:19:23.916 | 1.00th=[ 17], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 53], 00:19:23.916 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 77], 00:19:23.916 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 104], 95.00th=[ 114], 00:19:23.916 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 150], 99.95th=[ 150], 00:19:23.916 | 99.99th=[ 157] 00:19:23.916 bw ( KiB/s): min= 608, max= 1536, per=4.22%, avg=894.35, stdev=188.34, samples=20 00:19:23.916 iops : min= 152, max= 384, avg=223.55, stdev=47.09, samples=20 00:19:23.916 lat (msec) : 20=2.13%, 50=14.92%, 100=71.18%, 250=11.77% 00:19:23.916 cpu : usr=43.04%, sys=2.58%, ctx=1431, majf=0, minf=9 00:19:23.916 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:23.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.916 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.916 issued rwts: total=2252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.916 filename1: (groupid=0, jobs=1): err= 0: pid=82724: Fri Dec 6 12:27:08 2024 00:19:23.916 read: IOPS=215, BW=860KiB/s (881kB/s)(8644KiB/10046msec) 00:19:23.916 slat (usec): min=3, max=8023, avg=21.96, stdev=243.61 00:19:23.916 clat (msec): min=22, max=137, avg=74.25, stdev=21.17 00:19:23.916 lat (msec): min=22, max=137, avg=74.27, 
stdev=21.17 00:19:23.916 clat percentiles (msec): 00:19:23.916 | 1.00th=[ 24], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 57], 00:19:23.916 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 79], 00:19:23.916 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 108], 95.00th=[ 112], 00:19:23.916 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 138], 99.95th=[ 138], 00:19:23.916 | 99.99th=[ 138] 00:19:23.916 bw ( KiB/s): min= 640, max= 1405, per=4.05%, avg=857.65, stdev=167.60, samples=20 00:19:23.916 iops : min= 160, max= 351, avg=214.35, stdev=41.86, samples=20 00:19:23.916 lat (msec) : 50=16.01%, 100=71.31%, 250=12.68% 00:19:23.916 cpu : usr=33.38%, sys=1.87%, ctx=994, majf=0, minf=9 00:19:23.916 IO depths : 1=0.1%, 2=1.1%, 4=4.0%, 8=78.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:23.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.916 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.916 issued rwts: total=2161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.916 filename1: (groupid=0, jobs=1): err= 0: pid=82725: Fri Dec 6 12:27:08 2024 00:19:23.916 read: IOPS=218, BW=875KiB/s (896kB/s)(8784KiB/10040msec) 00:19:23.916 slat (usec): min=6, max=8026, avg=31.88, stdev=362.26 00:19:23.916 clat (msec): min=9, max=146, avg=72.90, stdev=22.94 00:19:23.916 lat (msec): min=9, max=146, avg=72.93, stdev=22.93 00:19:23.916 clat percentiles (msec): 00:19:23.916 | 1.00th=[ 12], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 52], 00:19:23.916 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:19:23.917 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 108], 95.00th=[ 117], 00:19:23.917 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:19:23.917 | 99.99th=[ 146] 00:19:23.917 bw ( KiB/s): min= 584, max= 1520, per=4.13%, avg=874.15, stdev=188.24, samples=20 00:19:23.917 iops : min= 146, max= 380, avg=218.45, stdev=47.05, samples=20 00:19:23.917 lat (msec) : 10=0.27%, 20=3.28%, 50=15.07%, 100=69.44%, 250=11.93% 00:19:23.917 cpu : usr=34.86%, sys=1.93%, ctx=964, majf=0, minf=9 00:19:23.917 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:23.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.917 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.917 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.917 filename2: (groupid=0, jobs=1): err= 0: pid=82726: Fri Dec 6 12:27:08 2024 00:19:23.917 read: IOPS=215, BW=863KiB/s (884kB/s)(8664KiB/10034msec) 00:19:23.917 slat (usec): min=7, max=8026, avg=34.15, stdev=387.27 00:19:23.917 clat (msec): min=15, max=156, avg=73.96, stdev=21.44 00:19:23.917 lat (msec): min=15, max=156, avg=74.00, stdev=21.44 00:19:23.917 clat percentiles (msec): 00:19:23.917 | 1.00th=[ 24], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 56], 00:19:23.917 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 80], 00:19:23.917 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 107], 95.00th=[ 116], 00:19:23.917 | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 155], 00:19:23.917 | 99.99th=[ 157] 00:19:23.917 bw ( KiB/s): min= 616, max= 1408, per=4.06%, avg=859.65, stdev=171.22, samples=20 00:19:23.917 iops : min= 154, max= 352, avg=214.90, stdev=42.80, samples=20 00:19:23.917 lat (msec) : 20=0.09%, 50=16.90%, 100=70.87%, 250=12.14% 00:19:23.917 cpu : usr=36.75%, sys=1.88%, ctx=1009, 
majf=0, minf=9 00:19:23.917 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:23.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.917 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.917 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.917 filename2: (groupid=0, jobs=1): err= 0: pid=82727: Fri Dec 6 12:27:08 2024 00:19:23.917 read: IOPS=232, BW=928KiB/s (950kB/s)(9284KiB/10004msec) 00:19:23.917 slat (usec): min=4, max=7899, avg=26.24, stdev=282.31 00:19:23.917 clat (msec): min=6, max=136, avg=68.85, stdev=21.29 00:19:23.917 lat (msec): min=6, max=136, avg=68.88, stdev=21.29 00:19:23.917 clat percentiles (msec): 00:19:23.917 | 1.00th=[ 18], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 49], 00:19:23.917 | 30.00th=[ 55], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 73], 00:19:23.917 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 111], 00:19:23.917 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 138], 99.95th=[ 138], 00:19:23.917 | 99.99th=[ 138] 00:19:23.917 bw ( KiB/s): min= 664, max= 1032, per=4.32%, avg=914.11, stdev=117.72, samples=19 00:19:23.917 iops : min= 166, max= 258, avg=228.53, stdev=29.43, samples=19 00:19:23.917 lat (msec) : 10=0.56%, 20=0.52%, 50=22.79%, 100=66.82%, 250=9.31% 00:19:23.917 cpu : usr=38.86%, sys=2.08%, ctx=1260, majf=0, minf=9 00:19:23.917 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:23.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.917 complete : 0=0.0%, 4=86.7%, 8=13.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.917 issued rwts: total=2321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.917 filename2: (groupid=0, jobs=1): err= 0: pid=82728: Fri Dec 6 12:27:08 2024 00:19:23.917 read: IOPS=231, BW=925KiB/s (947kB/s)(9256KiB/10006msec) 00:19:23.917 slat (usec): min=4, max=8029, avg=35.90, stdev=352.95 00:19:23.917 clat (msec): min=5, max=140, avg=69.03, stdev=21.67 00:19:23.917 lat (msec): min=5, max=140, avg=69.06, stdev=21.67 00:19:23.917 clat percentiles (msec): 00:19:23.917 | 1.00th=[ 12], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 48], 00:19:23.917 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:19:23.917 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 111], 00:19:23.917 | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 132], 99.95th=[ 132], 00:19:23.917 | 99.99th=[ 140] 00:19:23.917 bw ( KiB/s): min= 616, max= 1066, per=4.30%, avg=910.00, stdev=126.23, samples=19 00:19:23.917 iops : min= 154, max= 266, avg=227.47, stdev=31.52, samples=19 00:19:23.917 lat (msec) : 10=0.95%, 20=0.43%, 50=23.16%, 100=65.60%, 250=9.85% 00:19:23.917 cpu : usr=35.32%, sys=1.85%, ctx=1017, majf=0, minf=10 00:19:23.917 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:23.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.917 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.917 issued rwts: total=2314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.917 filename2: (groupid=0, jobs=1): err= 0: pid=82729: Fri Dec 6 12:27:08 2024 00:19:23.917 read: IOPS=220, BW=883KiB/s (904kB/s)(8856KiB/10029msec) 00:19:23.917 slat (usec): min=4, max=8024, avg=24.09, stdev=255.35 00:19:23.917 clat 
(msec): min=16, max=155, avg=72.29, stdev=21.31 00:19:23.917 lat (msec): min=16, max=155, avg=72.31, stdev=21.31 00:19:23.917 clat percentiles (msec): 00:19:23.917 | 1.00th=[ 26], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 52], 00:19:23.917 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:19:23.917 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 107], 95.00th=[ 114], 00:19:23.917 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 131], 99.95th=[ 140], 00:19:23.917 | 99.99th=[ 157] 00:19:23.917 bw ( KiB/s): min= 616, max= 1280, per=4.15%, avg=879.25, stdev=144.42, samples=20 00:19:23.917 iops : min= 154, max= 320, avg=219.80, stdev=36.10, samples=20 00:19:23.917 lat (msec) : 20=0.72%, 50=16.17%, 100=71.18%, 250=11.92% 00:19:23.917 cpu : usr=43.93%, sys=2.59%, ctx=1341, majf=0, minf=9 00:19:23.917 IO depths : 1=0.1%, 2=1.1%, 4=4.6%, 8=78.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:23.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.917 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.917 issued rwts: total=2214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.917 filename2: (groupid=0, jobs=1): err= 0: pid=82730: Fri Dec 6 12:27:08 2024 00:19:23.917 read: IOPS=237, BW=950KiB/s (973kB/s)(9500KiB/10002msec) 00:19:23.917 slat (usec): min=3, max=12031, avg=28.52, stdev=385.28 00:19:23.917 clat (usec): min=1171, max=132039, avg=67254.18, stdev=24305.24 00:19:23.917 lat (usec): min=1193, max=132049, avg=67282.70, stdev=24310.21 00:19:23.917 clat percentiles (usec): 00:19:23.917 | 1.00th=[ 1565], 5.00th=[ 33817], 10.00th=[ 46400], 20.00th=[ 47973], 00:19:23.917 | 30.00th=[ 50594], 40.00th=[ 60031], 50.00th=[ 70779], 60.00th=[ 71828], 00:19:23.917 | 70.00th=[ 78119], 80.00th=[ 83362], 90.00th=[ 95945], 95.00th=[108528], 00:19:23.917 | 99.00th=[120062], 99.50th=[121111], 99.90th=[131597], 99.95th=[131597], 00:19:23.917 | 99.99th=[131597] 00:19:23.917 bw ( KiB/s): min= 664, max= 1048, per=4.29%, avg=908.68, stdev=115.64, samples=19 00:19:23.917 iops : min= 166, max= 262, avg=227.16, stdev=28.91, samples=19 00:19:23.917 lat (msec) : 2=2.11%, 4=0.42%, 10=1.35%, 20=0.42%, 50=25.09% 00:19:23.917 lat (msec) : 100=61.35%, 250=9.26% 00:19:23.917 cpu : usr=31.65%, sys=1.76%, ctx=904, majf=0, minf=9 00:19:23.918 IO depths : 1=0.1%, 2=0.5%, 4=1.5%, 8=82.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:23.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.918 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.918 issued rwts: total=2375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.918 filename2: (groupid=0, jobs=1): err= 0: pid=82731: Fri Dec 6 12:27:08 2024 00:19:23.918 read: IOPS=218, BW=873KiB/s (894kB/s)(8748KiB/10024msec) 00:19:23.918 slat (usec): min=4, max=8025, avg=22.31, stdev=243.22 00:19:23.918 clat (msec): min=24, max=156, avg=73.15, stdev=21.60 00:19:23.918 lat (msec): min=24, max=156, avg=73.17, stdev=21.61 00:19:23.918 clat percentiles (msec): 00:19:23.918 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:19:23.918 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:19:23.918 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 120], 00:19:23.918 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 157], 00:19:23.918 | 99.99th=[ 157] 00:19:23.918 bw ( KiB/s): min= 584, max= 1280, per=4.12%, avg=871.20, 
stdev=154.75, samples=20 00:19:23.918 iops : min= 146, max= 320, avg=217.80, stdev=38.69, samples=20 00:19:23.918 lat (msec) : 50=19.02%, 100=68.40%, 250=12.57% 00:19:23.918 cpu : usr=35.35%, sys=2.04%, ctx=1163, majf=0, minf=9 00:19:23.918 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:23.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.918 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.918 issued rwts: total=2187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.918 filename2: (groupid=0, jobs=1): err= 0: pid=82732: Fri Dec 6 12:27:08 2024 00:19:23.918 read: IOPS=213, BW=856KiB/s (877kB/s)(8576KiB/10019msec) 00:19:23.918 slat (usec): min=8, max=8031, avg=35.92, stdev=396.19 00:19:23.918 clat (msec): min=29, max=132, avg=74.57, stdev=20.59 00:19:23.918 lat (msec): min=29, max=132, avg=74.60, stdev=20.58 00:19:23.918 clat percentiles (msec): 00:19:23.918 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 57], 00:19:23.918 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 00:19:23.918 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 107], 95.00th=[ 114], 00:19:23.918 | 99.00th=[ 127], 99.50th=[ 131], 99.90th=[ 132], 99.95th=[ 132], 00:19:23.918 | 99.99th=[ 132] 00:19:23.918 bw ( KiB/s): min= 616, max= 1024, per=4.03%, avg=854.00, stdev=118.14, samples=20 00:19:23.918 iops : min= 154, max= 256, avg=213.50, stdev=29.54, samples=20 00:19:23.918 lat (msec) : 50=16.84%, 100=71.50%, 250=11.66% 00:19:23.918 cpu : usr=33.16%, sys=1.95%, ctx=1016, majf=0, minf=9 00:19:23.918 IO depths : 1=0.1%, 2=1.4%, 4=5.9%, 8=77.4%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:23.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.918 complete : 0=0.0%, 4=88.6%, 8=10.1%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.918 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:23.918 filename2: (groupid=0, jobs=1): err= 0: pid=82733: Fri Dec 6 12:27:08 2024 00:19:23.918 read: IOPS=221, BW=885KiB/s (906kB/s)(8856KiB/10005msec) 00:19:23.918 slat (usec): min=4, max=8026, avg=20.33, stdev=190.63 00:19:23.918 clat (msec): min=3, max=133, avg=72.20, stdev=21.97 00:19:23.918 lat (msec): min=3, max=133, avg=72.22, stdev=21.97 00:19:23.918 clat percentiles (msec): 00:19:23.918 | 1.00th=[ 7], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 53], 00:19:23.918 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:19:23.918 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 105], 95.00th=[ 112], 00:19:23.918 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 131], 99.95th=[ 134], 00:19:23.918 | 99.99th=[ 134] 00:19:23.918 bw ( KiB/s): min= 656, max= 1024, per=4.09%, avg=865.68, stdev=128.79, samples=19 00:19:23.918 iops : min= 164, max= 256, avg=216.42, stdev=32.20, samples=19 00:19:23.918 lat (msec) : 4=0.14%, 10=1.45%, 20=0.14%, 50=15.94%, 100=71.18% 00:19:23.918 lat (msec) : 250=11.16% 00:19:23.918 cpu : usr=37.07%, sys=2.10%, ctx=1083, majf=0, minf=9 00:19:23.918 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=77.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:19:23.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.918 complete : 0=0.0%, 4=88.5%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.918 issued rwts: total=2214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.918 latency : target=0, window=0, percentile=100.00%, depth=16 
00:19:23.918 00:19:23.918 Run status group 0 (all jobs): 00:19:23.918 READ: bw=20.7MiB/s (21.7MB/s), 832KiB/s-950KiB/s (852kB/s-973kB/s), io=208MiB (218MB), run=10002-10047msec 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.918 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.919 bdev_null0 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.919 [2024-12-06 12:27:09.178606] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.919 bdev_null1 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:23.919 { 00:19:23.919 "params": { 00:19:23.919 "name": "Nvme$subsystem", 00:19:23.919 "trtype": "$TEST_TRANSPORT", 00:19:23.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.919 "adrfam": "ipv4", 00:19:23.919 "trsvcid": "$NVMF_PORT", 00:19:23.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.919 "hdgst": ${hdgst:-false}, 00:19:23.919 "ddgst": ${ddgst:-false} 
00:19:23.919 }, 00:19:23.919 "method": "bdev_nvme_attach_controller" 00:19:23.919 } 00:19:23.919 EOF 00:19:23.919 )") 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:23.919 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:23.919 { 00:19:23.919 "params": { 00:19:23.919 "name": "Nvme$subsystem", 00:19:23.919 "trtype": "$TEST_TRANSPORT", 00:19:23.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.920 "adrfam": "ipv4", 00:19:23.920 "trsvcid": "$NVMF_PORT", 00:19:23.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.920 "hdgst": ${hdgst:-false}, 00:19:23.920 "ddgst": ${ddgst:-false} 00:19:23.920 }, 00:19:23.920 "method": "bdev_nvme_attach_controller" 00:19:23.920 } 00:19:23.920 EOF 00:19:23.920 )") 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:23.920 "params": { 00:19:23.920 "name": "Nvme0", 00:19:23.920 "trtype": "tcp", 00:19:23.920 "traddr": "10.0.0.3", 00:19:23.920 "adrfam": "ipv4", 00:19:23.920 "trsvcid": "4420", 00:19:23.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:23.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:23.920 "hdgst": false, 00:19:23.920 "ddgst": false 00:19:23.920 }, 00:19:23.920 "method": "bdev_nvme_attach_controller" 00:19:23.920 },{ 00:19:23.920 "params": { 00:19:23.920 "name": "Nvme1", 00:19:23.920 "trtype": "tcp", 00:19:23.920 "traddr": "10.0.0.3", 00:19:23.920 "adrfam": "ipv4", 00:19:23.920 "trsvcid": "4420", 00:19:23.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.920 "hdgst": false, 00:19:23.920 "ddgst": false 00:19:23.920 }, 00:19:23.920 "method": "bdev_nvme_attach_controller" 00:19:23.920 }' 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:23.920 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.920 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:23.920 ... 00:19:23.920 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:23.920 ... 
00:19:23.920 fio-3.35 00:19:23.920 Starting 4 threads 00:19:29.194 00:19:29.194 filename0: (groupid=0, jobs=1): err= 0: pid=82874: Fri Dec 6 12:27:14 2024 00:19:29.194 read: IOPS=2710, BW=21.2MiB/s (22.2MB/s)(106MiB/5001msec) 00:19:29.194 slat (usec): min=6, max=123, avg= 9.78, stdev= 4.17 00:19:29.194 clat (usec): min=798, max=4775, avg=2927.29, stdev=1061.43 00:19:29.194 lat (usec): min=806, max=4783, avg=2937.08, stdev=1061.11 00:19:29.194 clat percentiles (usec): 00:19:29.194 | 1.00th=[ 1237], 5.00th=[ 1270], 10.00th=[ 1303], 20.00th=[ 1385], 00:19:29.194 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 3130], 60.00th=[ 3687], 00:19:29.194 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 3982], 95.00th=[ 4080], 00:19:29.194 | 99.00th=[ 4359], 99.50th=[ 4424], 99.90th=[ 4621], 99.95th=[ 4686], 00:19:29.194 | 99.99th=[ 4752] 00:19:29.194 bw ( KiB/s): min=21472, max=22064, per=31.46%, avg=21696.00, stdev=193.99, samples=9 00:19:29.194 iops : min= 2684, max= 2758, avg=2712.00, stdev=24.25, samples=9 00:19:29.194 lat (usec) : 1000=0.04% 00:19:29.194 lat (msec) : 2=27.34%, 4=64.13%, 10=8.48% 00:19:29.194 cpu : usr=91.16%, sys=7.74%, ctx=5, majf=0, minf=0 00:19:29.194 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.194 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.194 issued rwts: total=13555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.194 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:29.194 filename0: (groupid=0, jobs=1): err= 0: pid=82875: Fri Dec 6 12:27:14 2024 00:19:29.194 read: IOPS=1970, BW=15.4MiB/s (16.1MB/s)(77.0MiB/5003msec) 00:19:29.194 slat (nsec): min=3463, max=64042, avg=15236.49, stdev=5089.19 00:19:29.194 clat (usec): min=2756, max=5320, avg=4000.98, stdev=159.78 00:19:29.194 lat (usec): min=2768, max=5333, avg=4016.21, stdev=160.01 00:19:29.194 clat percentiles (usec): 00:19:29.194 | 1.00th=[ 3752], 5.00th=[ 3818], 10.00th=[ 3851], 20.00th=[ 3884], 00:19:29.195 | 30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 3982], 00:19:29.195 | 70.00th=[ 4015], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4293], 00:19:29.195 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 4817], 99.95th=[ 5145], 00:19:29.195 | 99.99th=[ 5342] 00:19:29.195 bw ( KiB/s): min=15616, max=15872, per=22.85%, avg=15758.22, stdev=112.70, samples=9 00:19:29.195 iops : min= 1952, max= 1984, avg=1969.78, stdev=14.09, samples=9 00:19:29.195 lat (msec) : 4=61.81%, 10=38.19% 00:19:29.195 cpu : usr=91.78%, sys=7.38%, ctx=5, majf=0, minf=0 00:19:29.195 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.195 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.195 issued rwts: total=9856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.195 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:29.195 filename1: (groupid=0, jobs=1): err= 0: pid=82876: Fri Dec 6 12:27:14 2024 00:19:29.195 read: IOPS=1970, BW=15.4MiB/s (16.1MB/s)(77.0MiB/5002msec) 00:19:29.195 slat (nsec): min=7200, max=67851, avg=15419.68, stdev=5415.32 00:19:29.195 clat (usec): min=2755, max=4827, avg=3998.93, stdev=156.19 00:19:29.195 lat (usec): min=2768, max=4841, avg=4014.35, stdev=156.56 00:19:29.195 clat percentiles (usec): 00:19:29.195 | 1.00th=[ 3752], 5.00th=[ 3818], 10.00th=[ 3851], 20.00th=[ 3884], 00:19:29.195 | 
30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 3982], 00:19:29.195 | 70.00th=[ 4015], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4293], 00:19:29.195 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 4686], 99.95th=[ 4752], 00:19:29.195 | 99.99th=[ 4817] 00:19:29.195 bw ( KiB/s): min=15616, max=15903, per=22.86%, avg=15761.67, stdev=117.00, samples=9 00:19:29.195 iops : min= 1952, max= 1987, avg=1970.11, stdev=14.50, samples=9 00:19:29.195 lat (msec) : 4=62.27%, 10=37.73% 00:19:29.195 cpu : usr=91.16%, sys=8.00%, ctx=10, majf=0, minf=0 00:19:29.195 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.195 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.195 issued rwts: total=9856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.195 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:29.195 filename1: (groupid=0, jobs=1): err= 0: pid=82877: Fri Dec 6 12:27:14 2024 00:19:29.195 read: IOPS=1970, BW=15.4MiB/s (16.1MB/s)(77.0MiB/5002msec) 00:19:29.195 slat (nsec): min=7118, max=63371, avg=15011.94, stdev=5207.60 00:19:29.195 clat (usec): min=2737, max=4834, avg=4001.93, stdev=157.24 00:19:29.195 lat (usec): min=2749, max=4849, avg=4016.94, stdev=157.43 00:19:29.195 clat percentiles (usec): 00:19:29.195 | 1.00th=[ 3752], 5.00th=[ 3818], 10.00th=[ 3851], 20.00th=[ 3884], 00:19:29.195 | 30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 3982], 00:19:29.195 | 70.00th=[ 4015], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4293], 00:19:29.195 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 4752], 99.95th=[ 4752], 00:19:29.195 | 99.99th=[ 4817] 00:19:29.195 bw ( KiB/s): min=15616, max=15903, per=22.86%, avg=15761.67, stdev=117.00, samples=9 00:19:29.195 iops : min= 1952, max= 1987, avg=1970.11, stdev=14.50, samples=9 00:19:29.195 lat (msec) : 4=61.16%, 10=38.84% 00:19:29.195 cpu : usr=92.20%, sys=6.96%, ctx=7, majf=0, minf=1 00:19:29.195 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.195 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.195 issued rwts: total=9856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.195 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:29.195 00:19:29.195 Run status group 0 (all jobs): 00:19:29.195 READ: bw=67.3MiB/s (70.6MB/s), 15.4MiB/s-21.2MiB/s (16.1MB/s-22.2MB/s), io=337MiB (353MB), run=5001-5003msec 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.195 00:19:29.195 real 0m23.090s 00:19:29.195 user 2m3.135s 00:19:29.195 sys 0m8.449s 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.195 ************************************ 00:19:29.195 12:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:29.195 END TEST fio_dif_rand_params 00:19:29.195 ************************************ 00:19:29.195 12:27:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:29.195 12:27:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:29.195 12:27:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.195 12:27:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:29.195 ************************************ 00:19:29.195 START TEST fio_dif_digest 00:19:29.195 ************************************ 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:29.195 bdev_null0 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.195 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:29.196 [2024-12-06 12:27:15.234138] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:29.196 { 00:19:29.196 "params": { 00:19:29.196 "name": "Nvme$subsystem", 00:19:29.196 "trtype": "$TEST_TRANSPORT", 00:19:29.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.196 "adrfam": "ipv4", 00:19:29.196 "trsvcid": "$NVMF_PORT", 00:19:29.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.196 "hdgst": ${hdgst:-false}, 00:19:29.196 "ddgst": ${ddgst:-false} 00:19:29.196 }, 00:19:29.196 "method": "bdev_nvme_attach_controller" 00:19:29.196 } 00:19:29.196 EOF 00:19:29.196 )") 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:29.196 "params": { 00:19:29.196 "name": "Nvme0", 00:19:29.196 "trtype": "tcp", 00:19:29.196 "traddr": "10.0.0.3", 00:19:29.196 "adrfam": "ipv4", 00:19:29.196 "trsvcid": "4420", 00:19:29.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:29.196 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:29.196 "hdgst": true, 00:19:29.196 "ddgst": true 00:19:29.196 }, 00:19:29.196 "method": "bdev_nvme_attach_controller" 00:19:29.196 }' 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:29.196 12:27:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:29.196 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:29.196 ... 
00:19:29.196 fio-3.35 00:19:29.196 Starting 3 threads 00:19:41.406 00:19:41.406 filename0: (groupid=0, jobs=1): err= 0: pid=82983: Fri Dec 6 12:27:25 2024 00:19:41.406 read: IOPS=242, BW=30.3MiB/s (31.7MB/s)(303MiB/10001msec) 00:19:41.406 slat (nsec): min=7190, max=46203, avg=13152.98, stdev=3968.93 00:19:41.406 clat (usec): min=11873, max=18801, avg=12363.80, stdev=428.77 00:19:41.406 lat (usec): min=11885, max=18831, avg=12376.95, stdev=428.98 00:19:41.406 clat percentiles (usec): 00:19:41.406 | 1.00th=[11994], 5.00th=[11994], 10.00th=[12125], 20.00th=[12125], 00:19:41.406 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12256], 60.00th=[12256], 00:19:41.406 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:19:41.406 | 99.00th=[13960], 99.50th=[14091], 99.90th=[18744], 99.95th=[18744], 00:19:41.406 | 99.99th=[18744] 00:19:41.406 bw ( KiB/s): min=30720, max=31488, per=33.29%, avg=30962.53, stdev=366.77, samples=19 00:19:41.406 iops : min= 240, max= 246, avg=241.89, stdev= 2.87, samples=19 00:19:41.406 lat (msec) : 20=100.00% 00:19:41.406 cpu : usr=91.46%, sys=7.98%, ctx=7, majf=0, minf=0 00:19:41.406 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.406 issued rwts: total=2421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.406 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:41.406 filename0: (groupid=0, jobs=1): err= 0: pid=82984: Fri Dec 6 12:27:25 2024 00:19:41.406 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(303MiB/10003msec) 00:19:41.406 slat (nsec): min=7073, max=54991, avg=13817.93, stdev=4219.86 00:19:41.406 clat (usec): min=8467, max=14148, avg=12348.25, stdev=388.18 00:19:41.406 lat (usec): min=8475, max=14163, avg=12362.07, stdev=388.56 00:19:41.406 clat percentiles (usec): 00:19:41.406 | 1.00th=[11994], 5.00th=[11994], 10.00th=[12125], 20.00th=[12125], 00:19:41.406 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12256], 60.00th=[12256], 00:19:41.406 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:19:41.406 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14091], 99.95th=[14091], 00:19:41.406 | 99.99th=[14091] 00:19:41.406 bw ( KiB/s): min=30720, max=31488, per=33.33%, avg=31002.95, stdev=380.62, samples=19 00:19:41.406 iops : min= 240, max= 246, avg=242.21, stdev= 2.97, samples=19 00:19:41.406 lat (msec) : 10=0.12%, 20=99.88% 00:19:41.406 cpu : usr=92.26%, sys=7.16%, ctx=6, majf=0, minf=0 00:19:41.406 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.406 issued rwts: total=2424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.406 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:41.406 filename0: (groupid=0, jobs=1): err= 0: pid=82985: Fri Dec 6 12:27:25 2024 00:19:41.406 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(303MiB/10003msec) 00:19:41.406 slat (nsec): min=6947, max=44304, avg=9637.55, stdev=3800.06 00:19:41.406 clat (usec): min=6993, max=15379, avg=12356.09, stdev=425.15 00:19:41.406 lat (usec): min=7000, max=15406, avg=12365.73, stdev=425.44 00:19:41.406 clat percentiles (usec): 00:19:41.406 | 1.00th=[11994], 5.00th=[11994], 10.00th=[12125], 20.00th=[12125], 00:19:41.406 | 30.00th=[12125], 40.00th=[12256], 
50.00th=[12256], 60.00th=[12256], 00:19:41.406 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12780], 95.00th=[13173], 00:19:41.406 | 99.00th=[13960], 99.50th=[14091], 99.90th=[15401], 99.95th=[15401], 00:19:41.406 | 99.99th=[15401] 00:19:41.406 bw ( KiB/s): min=30720, max=31488, per=33.32%, avg=30991.85, stdev=373.77, samples=20 00:19:41.406 iops : min= 240, max= 246, avg=242.10, stdev= 2.94, samples=20 00:19:41.406 lat (msec) : 10=0.12%, 20=99.88% 00:19:41.406 cpu : usr=91.70%, sys=7.72%, ctx=90, majf=0, minf=0 00:19:41.406 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.406 issued rwts: total=2424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.406 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:41.406 00:19:41.406 Run status group 0 (all jobs): 00:19:41.406 READ: bw=90.8MiB/s (95.2MB/s), 30.3MiB/s-30.3MiB/s (31.7MB/s-31.8MB/s), io=909MiB (953MB), run=10001-10003msec 00:19:41.406 12:27:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:41.406 12:27:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:19:41.406 12:27:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:19:41.406 12:27:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:41.406 12:27:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:19:41.407 12:27:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:41.407 12:27:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.407 12:27:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:41.407 12:27:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.407 12:27:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:41.407 12:27:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.407 12:27:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:41.407 12:27:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.407 00:19:41.407 real 0m10.900s 00:19:41.407 user 0m28.148s 00:19:41.407 sys 0m2.497s 00:19:41.407 12:27:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.407 12:27:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:41.407 ************************************ 00:19:41.407 END TEST fio_dif_digest 00:19:41.407 ************************************ 00:19:41.407 12:27:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:41.407 12:27:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:41.407 rmmod nvme_tcp 00:19:41.407 rmmod nvme_fabrics 00:19:41.407 rmmod nvme_keyring 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:41.407 12:27:26 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82240 ']' 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82240 00:19:41.407 12:27:26 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82240 ']' 00:19:41.407 12:27:26 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82240 00:19:41.407 12:27:26 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:19:41.407 12:27:26 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.407 12:27:26 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82240 00:19:41.407 killing process with pid 82240 00:19:41.407 12:27:26 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.407 12:27:26 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.407 12:27:26 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82240' 00:19:41.407 12:27:26 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82240 00:19:41.407 12:27:26 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82240 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:41.407 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:41.407 Waiting for block devices as requested 00:19:41.407 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:41.407 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:41.407 12:27:26 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.407 12:27:27 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.407 12:27:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:41.407 12:27:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.407 12:27:27 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:19:41.407 00:19:41.407 real 0m58.652s 00:19:41.407 user 3m46.453s 00:19:41.407 sys 0m19.176s 00:19:41.407 12:27:27 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.407 ************************************ 00:19:41.407 12:27:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:41.407 END TEST nvmf_dif 00:19:41.407 ************************************ 00:19:41.407 12:27:27 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:41.407 12:27:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:41.407 12:27:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.407 12:27:27 -- common/autotest_common.sh@10 -- # set +x 00:19:41.407 ************************************ 00:19:41.407 START TEST nvmf_abort_qd_sizes 00:19:41.407 ************************************ 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:41.407 * Looking for test storage... 00:19:41.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:41.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.407 --rc genhtml_branch_coverage=1 00:19:41.407 --rc genhtml_function_coverage=1 00:19:41.407 --rc genhtml_legend=1 00:19:41.407 --rc geninfo_all_blocks=1 00:19:41.407 --rc geninfo_unexecuted_blocks=1 00:19:41.407 00:19:41.407 ' 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:41.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.407 --rc genhtml_branch_coverage=1 00:19:41.407 --rc genhtml_function_coverage=1 00:19:41.407 --rc genhtml_legend=1 00:19:41.407 --rc geninfo_all_blocks=1 00:19:41.407 --rc geninfo_unexecuted_blocks=1 00:19:41.407 00:19:41.407 ' 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:41.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.407 --rc genhtml_branch_coverage=1 00:19:41.407 --rc genhtml_function_coverage=1 00:19:41.407 --rc genhtml_legend=1 00:19:41.407 --rc geninfo_all_blocks=1 00:19:41.407 --rc geninfo_unexecuted_blocks=1 00:19:41.407 00:19:41.407 ' 00:19:41.407 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:41.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.407 --rc genhtml_branch_coverage=1 00:19:41.407 --rc genhtml_function_coverage=1 00:19:41.407 --rc genhtml_legend=1 00:19:41.407 --rc geninfo_all_blocks=1 00:19:41.408 --rc geninfo_unexecuted_blocks=1 00:19:41.408 00:19:41.408 ' 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.408 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:41.408 Cannot find device "nvmf_init_br" 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:41.408 Cannot find device "nvmf_init_br2" 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:41.408 Cannot find device "nvmf_tgt_br" 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:41.408 Cannot find device "nvmf_tgt_br2" 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:41.408 Cannot find device "nvmf_init_br" 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:41.408 Cannot find device "nvmf_init_br2" 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:41.408 Cannot find device "nvmf_tgt_br" 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:41.408 Cannot find device "nvmf_tgt_br2" 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:41.408 Cannot find device "nvmf_br" 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:41.408 Cannot find device "nvmf_init_if" 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:41.408 Cannot find device "nvmf_init_if2" 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:41.408 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:41.409 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:41.409 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:19:41.409 00:19:41.409 --- 10.0.0.3 ping statistics --- 00:19:41.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.409 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:41.409 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:41.409 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:19:41.409 00:19:41.409 --- 10.0.0.4 ping statistics --- 00:19:41.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.409 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:41.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:41.409 00:19:41.409 --- 10.0.0.1 ping statistics --- 00:19:41.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.409 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:41.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:41.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:19:41.409 00:19:41.409 --- 10.0.0.2 ping statistics --- 00:19:41.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.409 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:41.409 12:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:41.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:42.237 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:42.237 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:42.237 12:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.237 12:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:42.237 12:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:42.237 12:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.237 12:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:42.237 12:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:42.237 12:27:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:42.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=83624 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 83624 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 83624 ']' 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.238 12:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:42.238 [2024-12-06 12:27:28.891359] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:19:42.238 [2024-12-06 12:27:28.891626] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.497 [2024-12-06 12:27:29.044437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.497 [2024-12-06 12:27:29.086014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.497 [2024-12-06 12:27:29.086259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.497 [2024-12-06 12:27:29.086428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.497 [2024-12-06 12:27:29.086730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.497 [2024-12-06 12:27:29.086883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.497 [2024-12-06 12:27:29.087857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.497 [2024-12-06 12:27:29.088020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.497 [2024-12-06 12:27:29.088412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.497 [2024-12-06 12:27:29.088420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.497 [2024-12-06 12:27:29.125209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:19:42.757 12:27:29 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.757 12:27:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:42.757 ************************************ 00:19:42.757 START TEST spdk_target_abort 00:19:42.757 ************************************ 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:42.757 spdk_targetn1 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:42.757 [2024-12-06 12:27:29.345935] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.757 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:42.758 [2024-12-06 12:27:29.386708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:42.758 12:27:29 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:42.758 12:27:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:46.047 Initializing NVMe Controllers 00:19:46.047 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:46.047 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:46.047 Initialization complete. Launching workers. 
00:19:46.047 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9921, failed: 0 00:19:46.047 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1077, failed to submit 8844 00:19:46.047 success 823, unsuccessful 254, failed 0 00:19:46.047 12:27:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:46.047 12:27:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:50.234 Initializing NVMe Controllers 00:19:50.234 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:50.234 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:50.234 Initialization complete. Launching workers. 00:19:50.234 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:19:50.234 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1139, failed to submit 7861 00:19:50.234 success 408, unsuccessful 731, failed 0 00:19:50.234 12:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:50.234 12:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:52.766 Initializing NVMe Controllers 00:19:52.766 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:52.766 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:52.766 Initialization complete. Launching workers. 
00:19:52.766 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31442, failed: 0 00:19:52.766 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2352, failed to submit 29090 00:19:52.766 success 494, unsuccessful 1858, failed 0 00:19:52.766 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:19:52.766 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.766 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:52.766 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.766 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:52.766 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.766 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83624 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 83624 ']' 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 83624 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83624 00:19:53.333 killing process with pid 83624 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83624' 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 83624 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 83624 00:19:53.333 ************************************ 00:19:53.333 END TEST spdk_target_abort 00:19:53.333 ************************************ 00:19:53.333 00:19:53.333 real 0m10.715s 00:19:53.333 user 0m41.141s 00:19:53.333 sys 0m2.068s 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.333 12:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:53.592 12:27:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:19:53.592 12:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:53.592 12:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.592 12:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:53.592 ************************************ 00:19:53.592 START TEST kernel_target_abort 00:19:53.592 
************************************ 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:53.592 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:53.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:53.850 Waiting for block devices as requested 00:19:53.850 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:54.108 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:54.108 No valid GPT data, bailing 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:54.108 No valid GPT data, bailing 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:54.108 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:54.367 No valid GPT data, bailing 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:54.367 No valid GPT data, bailing 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 --hostid=539e2455-b2a8-46ce-bfce-40a317783b05 -a 10.0.0.1 -t tcp -s 4420 00:19:54.367 00:19:54.367 Discovery Log Number of Records 2, Generation counter 2 00:19:54.367 =====Discovery Log Entry 0====== 00:19:54.367 trtype: tcp 00:19:54.367 adrfam: ipv4 00:19:54.367 subtype: current discovery subsystem 00:19:54.367 treq: not specified, sq flow control disable supported 00:19:54.367 portid: 1 00:19:54.367 trsvcid: 4420 00:19:54.367 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:54.367 traddr: 10.0.0.1 00:19:54.367 eflags: none 00:19:54.367 sectype: none 00:19:54.367 =====Discovery Log Entry 1====== 00:19:54.367 trtype: tcp 00:19:54.367 adrfam: ipv4 00:19:54.367 subtype: nvme subsystem 00:19:54.367 treq: not specified, sq flow control disable supported 00:19:54.367 portid: 1 00:19:54.367 trsvcid: 4420 00:19:54.367 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:54.367 traddr: 10.0.0.1 00:19:54.367 eflags: none 00:19:54.367 sectype: none 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:54.367 12:27:40 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:54.367 12:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:57.655 Initializing NVMe Controllers 00:19:57.655 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:57.655 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:57.655 Initialization complete. Launching workers. 00:19:57.655 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34332, failed: 0 00:19:57.655 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34332, failed to submit 0 00:19:57.655 success 0, unsuccessful 34332, failed 0 00:19:57.655 12:27:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:57.655 12:27:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:00.958 Initializing NVMe Controllers 00:20:00.958 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:00.958 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:00.958 Initialization complete. Launching workers. 
00:20:00.958 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63412, failed: 0 00:20:00.958 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26249, failed to submit 37163 00:20:00.958 success 0, unsuccessful 26249, failed 0 00:20:00.958 12:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:00.958 12:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:04.252 Initializing NVMe Controllers 00:20:04.253 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:04.253 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:04.253 Initialization complete. Launching workers. 00:20:04.253 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69510, failed: 0 00:20:04.253 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17342, failed to submit 52168 00:20:04.253 success 0, unsuccessful 17342, failed 0 00:20:04.253 12:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:04.253 12:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:04.253 12:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:20:04.253 12:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:04.253 12:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:04.253 12:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:04.253 12:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:04.253 12:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:04.253 12:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:04.253 12:27:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:04.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:06.203 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:06.203 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:06.203 00:20:06.203 real 0m12.587s 00:20:06.203 user 0m5.924s 00:20:06.203 sys 0m4.132s 00:20:06.203 12:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.203 12:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:06.203 ************************************ 00:20:06.203 END TEST kernel_target_abort 00:20:06.203 ************************************ 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:06.203 
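# The kernel_target_abort runs above talk to a Linux nvmet target built through configfs,
# not an SPDK target. Bash xtrace does not show redirection targets, so the echo
# destinations below are the standard nvmet configfs attribute files; treat this block as a
# reconstruction of configure_kernel_target with values taken from the trace, not a verbatim
# capture. One further echo sets a subsystem identity string
# ("SPDK-nqn.2016-06.io.spdk:testnqn") whose target file is hidden by the trace.
#
#   modprobe nvmet                       # nvmet_tcp is also loaded; teardown removes both
#   sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
#   port=/sys/kernel/config/nvmet/ports/1
#   mkdir "$sub"; mkdir "$sub/namespaces/1"; mkdir "$port"
#   echo 1            > "$sub/attr_allow_any_host"
#   echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
#   echo 1            > "$sub/namespaces/1/enable"
#   echo 10.0.0.1     > "$port/addr_traddr"
#   echo tcp          > "$port/addr_trtype"
#   echo 4420         > "$port/addr_trsvcid"
#   echo ipv4         > "$port/addr_adrfam"
#   ln -s "$sub" "$port/subsystems/"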
12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:06.203 rmmod nvme_tcp 00:20:06.203 rmmod nvme_fabrics 00:20:06.203 rmmod nvme_keyring 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 83624 ']' 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 83624 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 83624 ']' 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 83624 00:20:06.203 Process with pid 83624 is not found 00:20:06.203 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83624) - No such process 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 83624 is not found' 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:06.203 12:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:06.778 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:06.778 Waiting for block devices as requested 00:20:06.778 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:06.778 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:06.778 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:07.081 12:27:53 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:20:07.081 00:20:07.081 real 0m26.358s 00:20:07.081 user 0m48.248s 00:20:07.081 sys 0m7.676s 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.081 12:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:07.081 ************************************ 00:20:07.081 END TEST nvmf_abort_qd_sizes 00:20:07.081 ************************************ 00:20:07.081 12:27:53 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:07.081 12:27:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:07.081 12:27:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.081 12:27:53 -- common/autotest_common.sh@10 -- # set +x 00:20:07.081 ************************************ 00:20:07.081 START TEST keyring_file 00:20:07.081 ************************************ 00:20:07.081 12:27:53 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:07.341 * Looking for test storage... 
00:20:07.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:07.341 12:27:53 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:07.341 12:27:53 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:20:07.341 12:27:53 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:07.341 12:27:53 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:07.341 12:27:53 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@345 -- # : 1 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@353 -- # local d=1 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@355 -- # echo 1 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@353 -- # local d=2 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@355 -- # echo 2 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@368 -- # return 0 00:20:07.342 12:27:53 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.342 12:27:53 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:07.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.342 --rc genhtml_branch_coverage=1 00:20:07.342 --rc genhtml_function_coverage=1 00:20:07.342 --rc genhtml_legend=1 00:20:07.342 --rc geninfo_all_blocks=1 00:20:07.342 --rc geninfo_unexecuted_blocks=1 00:20:07.342 00:20:07.342 ' 00:20:07.342 12:27:53 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:07.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.342 --rc genhtml_branch_coverage=1 00:20:07.342 --rc genhtml_function_coverage=1 00:20:07.342 --rc genhtml_legend=1 00:20:07.342 --rc geninfo_all_blocks=1 00:20:07.342 --rc 
geninfo_unexecuted_blocks=1 00:20:07.342 00:20:07.342 ' 00:20:07.342 12:27:53 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:07.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.342 --rc genhtml_branch_coverage=1 00:20:07.342 --rc genhtml_function_coverage=1 00:20:07.342 --rc genhtml_legend=1 00:20:07.342 --rc geninfo_all_blocks=1 00:20:07.342 --rc geninfo_unexecuted_blocks=1 00:20:07.342 00:20:07.342 ' 00:20:07.342 12:27:53 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:07.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.342 --rc genhtml_branch_coverage=1 00:20:07.342 --rc genhtml_function_coverage=1 00:20:07.342 --rc genhtml_legend=1 00:20:07.342 --rc geninfo_all_blocks=1 00:20:07.342 --rc geninfo_unexecuted_blocks=1 00:20:07.342 00:20:07.342 ' 00:20:07.342 12:27:53 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.342 12:27:53 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.342 12:27:53 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.342 12:27:53 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.342 12:27:53 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.342 12:27:53 keyring_file -- paths/export.sh@5 -- # export PATH 00:20:07.342 12:27:53 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@51 -- # : 0 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.342 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:07.342 12:27:53 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:07.342 12:27:53 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:07.342 12:27:53 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:07.342 12:27:53 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:07.342 12:27:53 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:07.342 12:27:53 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:07.342 12:27:53 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ThuszzSozD 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:07.342 12:27:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ThuszzSozD 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ThuszzSozD 00:20:07.342 12:27:53 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ThuszzSozD 00:20:07.342 12:27:53 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@17 -- # name=key1 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:07.342 12:27:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:07.343 12:27:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:07.343 12:27:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XMvHMoo3Wd 00:20:07.343 12:27:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:07.343 12:27:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:07.343 12:27:53 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:07.343 12:27:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:07.343 12:27:53 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:20:07.343 12:27:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:07.343 12:27:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:07.602 12:27:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XMvHMoo3Wd 00:20:07.602 12:27:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XMvHMoo3Wd 00:20:07.602 12:27:54 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.XMvHMoo3Wd 00:20:07.602 12:27:54 keyring_file -- keyring/file.sh@30 -- # tgtpid=84533 00:20:07.602 12:27:54 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84533 00:20:07.602 12:27:54 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:07.602 12:27:54 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84533 ']' 00:20:07.602 12:27:54 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.602 12:27:54 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
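# prep_key above wrote the two TLS PSKs to mktemp files (mode 0600) in the NVMe TLS PSK
# interchange format. The body of the "python -" here-doc run by format_key is not part of
# the trace; as a rough, assumed reconstruction, the interchange string for key0
# (hex PSK 00112233445566778899aabbccddeeff, digest indicator 0) can be produced like this:
#
#   python3 -c 'import base64,zlib; k=bytes.fromhex("00112233445566778899aabbccddeeff"); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())'
#
# The resulting "NVMeTLSkey-1:00:...:" string is what keyring_file_add_key loads below as
# key0/key1 for the --psk attach tests.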
00:20:07.602 12:27:54 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.602 12:27:54 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.602 12:27:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:07.602 [2024-12-06 12:27:54.110823] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:20:07.602 [2024-12-06 12:27:54.110922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84533 ] 00:20:07.861 [2024-12-06 12:27:54.258678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.861 [2024-12-06 12:27:54.298734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.861 [2024-12-06 12:27:54.347736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:07.861 12:27:54 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.861 12:27:54 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:07.861 12:27:54 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:20:07.861 12:27:54 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.861 12:27:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:07.861 [2024-12-06 12:27:54.503950] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.121 null0 00:20:08.121 [2024-12-06 12:27:54.535920] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.121 [2024-12-06 12:27:54.536121] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.121 12:27:54 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:08.121 [2024-12-06 12:27:54.567864] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:20:08.121 request: 00:20:08.121 { 00:20:08.121 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:08.121 "secure_channel": false, 00:20:08.121 "listen_address": { 00:20:08.121 "trtype": "tcp", 00:20:08.121 "traddr": "127.0.0.1", 00:20:08.121 "trsvcid": "4420" 00:20:08.121 }, 00:20:08.121 "method": "nvmf_subsystem_add_listener", 
00:20:08.121 "req_id": 1 00:20:08.121 } 00:20:08.121 Got JSON-RPC error response 00:20:08.121 response: 00:20:08.121 { 00:20:08.121 "code": -32602, 00:20:08.121 "message": "Invalid parameters" 00:20:08.121 } 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:08.121 12:27:54 keyring_file -- keyring/file.sh@47 -- # bperfpid=84543 00:20:08.121 12:27:54 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:08.121 12:27:54 keyring_file -- keyring/file.sh@49 -- # waitforlisten 84543 /var/tmp/bperf.sock 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84543 ']' 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.121 12:27:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:08.121 [2024-12-06 12:27:54.623034] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:20:08.121 [2024-12-06 12:27:54.623111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84543 ] 00:20:08.121 [2024-12-06 12:27:54.764330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.380 [2024-12-06 12:27:54.802924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.380 [2024-12-06 12:27:54.833901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:08.380 12:27:54 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.380 12:27:54 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:08.380 12:27:54 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ThuszzSozD 00:20:08.380 12:27:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ThuszzSozD 00:20:08.637 12:27:55 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XMvHMoo3Wd 00:20:08.637 12:27:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XMvHMoo3Wd 00:20:08.896 12:27:55 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:20:08.896 12:27:55 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:20:08.896 12:27:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:08.896 12:27:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:08.896 12:27:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:09.154 12:27:55 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ThuszzSozD == \/\t\m\p\/\t\m\p\.\T\h\u\s\z\z\S\o\z\D ]] 00:20:09.154 12:27:55 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:20:09.154 12:27:55 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:20:09.154 12:27:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:09.154 12:27:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:09.154 12:27:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:09.413 12:27:55 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.XMvHMoo3Wd == \/\t\m\p\/\t\m\p\.\X\M\v\H\M\o\o\3\W\d ]] 00:20:09.413 12:27:55 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:20:09.413 12:27:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:09.413 12:27:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:09.413 12:27:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:09.413 12:27:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:09.413 12:27:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:09.672 12:27:56 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:20:09.672 12:27:56 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:20:09.672 12:27:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:09.672 12:27:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:09.672 12:27:56 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:09.672 12:27:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:09.672 12:27:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:09.931 12:27:56 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:20:09.931 12:27:56 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:09.931 12:27:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:10.189 [2024-12-06 12:27:56.747888] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.189 nvme0n1 00:20:10.189 12:27:56 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:20:10.189 12:27:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:10.189 12:27:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:10.189 12:27:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:10.189 12:27:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:10.189 12:27:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:10.447 12:27:57 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:20:10.447 12:27:57 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:20:10.447 12:27:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:10.447 12:27:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:10.447 12:27:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:10.447 12:27:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:10.447 12:27:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:10.706 12:27:57 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:20:10.706 12:27:57 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:10.965 Running I/O for 1 seconds... 
00:20:11.902 14176.00 IOPS, 55.38 MiB/s 00:20:11.902 Latency(us) 00:20:11.902 [2024-12-06T12:27:58.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.902 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:20:11.902 nvme0n1 : 1.01 14217.19 55.54 0.00 0.00 8980.56 4498.15 17158.52 00:20:11.902 [2024-12-06T12:27:58.560Z] =================================================================================================================== 00:20:11.902 [2024-12-06T12:27:58.560Z] Total : 14217.19 55.54 0.00 0.00 8980.56 4498.15 17158.52 00:20:11.902 { 00:20:11.902 "results": [ 00:20:11.902 { 00:20:11.903 "job": "nvme0n1", 00:20:11.903 "core_mask": "0x2", 00:20:11.903 "workload": "randrw", 00:20:11.903 "percentage": 50, 00:20:11.903 "status": "finished", 00:20:11.903 "queue_depth": 128, 00:20:11.903 "io_size": 4096, 00:20:11.903 "runtime": 1.006247, 00:20:11.903 "iops": 14217.185243782093, 00:20:11.903 "mibps": 55.5358798585238, 00:20:11.903 "io_failed": 0, 00:20:11.903 "io_timeout": 0, 00:20:11.903 "avg_latency_us": 8980.558215624724, 00:20:11.903 "min_latency_us": 4498.152727272727, 00:20:11.903 "max_latency_us": 17158.516363636365 00:20:11.903 } 00:20:11.903 ], 00:20:11.903 "core_count": 1 00:20:11.903 } 00:20:11.903 12:27:58 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:11.903 12:27:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:12.162 12:27:58 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:20:12.162 12:27:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:12.162 12:27:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:12.162 12:27:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:12.162 12:27:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:12.162 12:27:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:12.420 12:27:58 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:20:12.420 12:27:58 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:20:12.420 12:27:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:12.420 12:27:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:12.420 12:27:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:12.420 12:27:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:12.420 12:27:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:12.679 12:27:59 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:20:12.679 12:27:59 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:12.679 12:27:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:20:12.679 12:27:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:12.679 12:27:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:12.679 12:27:59 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.679 12:27:59 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:12.679 12:27:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.679 12:27:59 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:12.679 12:27:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:12.938 [2024-12-06 12:27:59.479848] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:12.938 [2024-12-06 12:27:59.480509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a15d0 (107): Transport endpoint is not connected 00:20:12.938 [2024-12-06 12:27:59.481499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a15d0 (9): Bad file descriptor 00:20:12.938 [2024-12-06 12:27:59.482501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:20:12.938 [2024-12-06 12:27:59.482534] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:12.938 [2024-12-06 12:27:59.482558] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:20:12.938 [2024-12-06 12:27:59.482566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:20:12.938 request: 00:20:12.938 { 00:20:12.938 "name": "nvme0", 00:20:12.938 "trtype": "tcp", 00:20:12.938 "traddr": "127.0.0.1", 00:20:12.938 "adrfam": "ipv4", 00:20:12.938 "trsvcid": "4420", 00:20:12.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:12.938 "prchk_reftag": false, 00:20:12.938 "prchk_guard": false, 00:20:12.938 "hdgst": false, 00:20:12.938 "ddgst": false, 00:20:12.938 "psk": "key1", 00:20:12.938 "allow_unrecognized_csi": false, 00:20:12.939 "method": "bdev_nvme_attach_controller", 00:20:12.939 "req_id": 1 00:20:12.939 } 00:20:12.939 Got JSON-RPC error response 00:20:12.939 response: 00:20:12.939 { 00:20:12.939 "code": -5, 00:20:12.939 "message": "Input/output error" 00:20:12.939 } 00:20:12.939 12:27:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:12.939 12:27:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.939 12:27:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.939 12:27:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.939 12:27:59 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:20:12.939 12:27:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:12.939 12:27:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:12.939 12:27:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:12.939 12:27:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:12.939 12:27:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:13.197 12:27:59 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:20:13.197 12:27:59 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:20:13.197 12:27:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:13.197 12:27:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:13.197 12:27:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:13.197 12:27:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:13.197 12:27:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:13.456 12:28:00 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:20:13.456 12:28:00 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:20:13.456 12:28:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:13.715 12:28:00 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:20:13.715 12:28:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:20:13.974 12:28:00 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:20:13.974 12:28:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:13.974 12:28:00 keyring_file -- keyring/file.sh@78 -- # jq length 00:20:14.233 12:28:00 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:20:14.233 12:28:00 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.ThuszzSozD 00:20:14.233 12:28:00 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ThuszzSozD 00:20:14.233 12:28:00 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:20:14.233 12:28:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ThuszzSozD 00:20:14.233 12:28:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:14.233 12:28:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.233 12:28:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:14.233 12:28:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.233 12:28:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ThuszzSozD 00:20:14.233 12:28:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ThuszzSozD 00:20:14.492 [2024-12-06 12:28:01.032975] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ThuszzSozD': 0100660 00:20:14.492 [2024-12-06 12:28:01.033011] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:14.492 request: 00:20:14.492 { 00:20:14.492 "name": "key0", 00:20:14.492 "path": "/tmp/tmp.ThuszzSozD", 00:20:14.492 "method": "keyring_file_add_key", 00:20:14.492 "req_id": 1 00:20:14.492 } 00:20:14.492 Got JSON-RPC error response 00:20:14.492 response: 00:20:14.492 { 00:20:14.492 "code": -1, 00:20:14.492 "message": "Operation not permitted" 00:20:14.492 } 00:20:14.492 12:28:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:14.492 12:28:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:14.492 12:28:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:14.492 12:28:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:14.492 12:28:01 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.ThuszzSozD 00:20:14.492 12:28:01 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ThuszzSozD 00:20:14.492 12:28:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ThuszzSozD 00:20:14.750 12:28:01 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.ThuszzSozD 00:20:14.750 12:28:01 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:20:14.751 12:28:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:14.751 12:28:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:14.751 12:28:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:14.751 12:28:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:14.751 12:28:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:15.009 12:28:01 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:20:15.009 12:28:01 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:15.009 12:28:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:20:15.009 12:28:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:15.009 12:28:01 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:15.009 12:28:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.009 12:28:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:15.009 12:28:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.009 12:28:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:15.009 12:28:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:15.268 [2024-12-06 12:28:01.825120] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ThuszzSozD': No such file or directory 00:20:15.268 [2024-12-06 12:28:01.825155] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:20:15.268 [2024-12-06 12:28:01.825216] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:20:15.268 [2024-12-06 12:28:01.825227] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:20:15.268 [2024-12-06 12:28:01.825235] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:15.268 [2024-12-06 12:28:01.825259] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:20:15.268 request: 00:20:15.268 { 00:20:15.268 "name": "nvme0", 00:20:15.268 "trtype": "tcp", 00:20:15.268 "traddr": "127.0.0.1", 00:20:15.268 "adrfam": "ipv4", 00:20:15.268 "trsvcid": "4420", 00:20:15.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:15.268 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:15.268 "prchk_reftag": false, 00:20:15.268 "prchk_guard": false, 00:20:15.268 "hdgst": false, 00:20:15.268 "ddgst": false, 00:20:15.268 "psk": "key0", 00:20:15.268 "allow_unrecognized_csi": false, 00:20:15.268 "method": "bdev_nvme_attach_controller", 00:20:15.268 "req_id": 1 00:20:15.268 } 00:20:15.268 Got JSON-RPC error response 00:20:15.268 response: 00:20:15.268 { 00:20:15.268 "code": -19, 00:20:15.268 "message": "No such device" 00:20:15.268 } 00:20:15.268 12:28:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:15.268 12:28:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.268 12:28:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.268 12:28:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.268 12:28:01 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:20:15.268 12:28:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:15.527 12:28:02 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:15.527 12:28:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:15.527 12:28:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:15.527 12:28:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:15.527 
12:28:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:15.527 12:28:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:15.527 12:28:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DFH7kccIfk 00:20:15.527 12:28:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:15.527 12:28:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:15.527 12:28:02 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:15.527 12:28:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:15.527 12:28:02 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:15.527 12:28:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:15.527 12:28:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:15.527 12:28:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DFH7kccIfk 00:20:15.527 12:28:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DFH7kccIfk 00:20:15.527 12:28:02 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.DFH7kccIfk 00:20:15.527 12:28:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DFH7kccIfk 00:20:15.527 12:28:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DFH7kccIfk 00:20:15.786 12:28:02 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:15.786 12:28:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:16.045 nvme0n1 00:20:16.045 12:28:02 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:20:16.045 12:28:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:16.045 12:28:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:16.046 12:28:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:16.046 12:28:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:16.304 12:28:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:16.304 12:28:02 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:20:16.304 12:28:02 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:20:16.304 12:28:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:16.563 12:28:03 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:20:16.563 12:28:03 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:20:16.563 12:28:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:16.563 12:28:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:16.563 12:28:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:17.130 12:28:03 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:20:17.130 12:28:03 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:20:17.130 12:28:03 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:20:17.130 12:28:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:17.130 12:28:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:17.130 12:28:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:17.130 12:28:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:17.130 12:28:03 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:20:17.130 12:28:03 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:17.130 12:28:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:17.389 12:28:03 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:20:17.389 12:28:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:17.389 12:28:03 keyring_file -- keyring/file.sh@105 -- # jq length 00:20:17.648 12:28:04 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:20:17.648 12:28:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DFH7kccIfk 00:20:17.648 12:28:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DFH7kccIfk 00:20:17.907 12:28:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XMvHMoo3Wd 00:20:17.907 12:28:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XMvHMoo3Wd 00:20:18.165 12:28:04 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:18.165 12:28:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:18.424 nvme0n1 00:20:18.681 12:28:05 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:20:18.681 12:28:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:20:18.940 12:28:05 keyring_file -- keyring/file.sh@113 -- # config='{ 00:20:18.940 "subsystems": [ 00:20:18.940 { 00:20:18.940 "subsystem": "keyring", 00:20:18.940 "config": [ 00:20:18.940 { 00:20:18.940 "method": "keyring_file_add_key", 00:20:18.940 "params": { 00:20:18.940 "name": "key0", 00:20:18.940 "path": "/tmp/tmp.DFH7kccIfk" 00:20:18.940 } 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "method": "keyring_file_add_key", 00:20:18.940 "params": { 00:20:18.940 "name": "key1", 00:20:18.940 "path": "/tmp/tmp.XMvHMoo3Wd" 00:20:18.940 } 00:20:18.940 } 00:20:18.940 ] 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "subsystem": "iobuf", 00:20:18.940 "config": [ 00:20:18.940 { 00:20:18.940 "method": "iobuf_set_options", 00:20:18.940 "params": { 00:20:18.940 "small_pool_count": 8192, 00:20:18.940 "large_pool_count": 1024, 00:20:18.940 "small_bufsize": 8192, 00:20:18.940 "large_bufsize": 135168, 00:20:18.940 "enable_numa": false 00:20:18.940 } 00:20:18.940 } 00:20:18.940 ] 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "subsystem": 
"sock", 00:20:18.940 "config": [ 00:20:18.940 { 00:20:18.940 "method": "sock_set_default_impl", 00:20:18.940 "params": { 00:20:18.940 "impl_name": "uring" 00:20:18.940 } 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "method": "sock_impl_set_options", 00:20:18.940 "params": { 00:20:18.940 "impl_name": "ssl", 00:20:18.940 "recv_buf_size": 4096, 00:20:18.940 "send_buf_size": 4096, 00:20:18.940 "enable_recv_pipe": true, 00:20:18.940 "enable_quickack": false, 00:20:18.940 "enable_placement_id": 0, 00:20:18.940 "enable_zerocopy_send_server": true, 00:20:18.940 "enable_zerocopy_send_client": false, 00:20:18.940 "zerocopy_threshold": 0, 00:20:18.940 "tls_version": 0, 00:20:18.940 "enable_ktls": false 00:20:18.940 } 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "method": "sock_impl_set_options", 00:20:18.940 "params": { 00:20:18.940 "impl_name": "posix", 00:20:18.940 "recv_buf_size": 2097152, 00:20:18.940 "send_buf_size": 2097152, 00:20:18.940 "enable_recv_pipe": true, 00:20:18.940 "enable_quickack": false, 00:20:18.940 "enable_placement_id": 0, 00:20:18.940 "enable_zerocopy_send_server": true, 00:20:18.940 "enable_zerocopy_send_client": false, 00:20:18.940 "zerocopy_threshold": 0, 00:20:18.940 "tls_version": 0, 00:20:18.940 "enable_ktls": false 00:20:18.940 } 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "method": "sock_impl_set_options", 00:20:18.940 "params": { 00:20:18.940 "impl_name": "uring", 00:20:18.940 "recv_buf_size": 2097152, 00:20:18.940 "send_buf_size": 2097152, 00:20:18.940 "enable_recv_pipe": true, 00:20:18.940 "enable_quickack": false, 00:20:18.940 "enable_placement_id": 0, 00:20:18.940 "enable_zerocopy_send_server": false, 00:20:18.940 "enable_zerocopy_send_client": false, 00:20:18.940 "zerocopy_threshold": 0, 00:20:18.940 "tls_version": 0, 00:20:18.940 "enable_ktls": false 00:20:18.940 } 00:20:18.940 } 00:20:18.940 ] 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "subsystem": "vmd", 00:20:18.940 "config": [] 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "subsystem": "accel", 00:20:18.940 "config": [ 00:20:18.940 { 00:20:18.940 "method": "accel_set_options", 00:20:18.940 "params": { 00:20:18.940 "small_cache_size": 128, 00:20:18.940 "large_cache_size": 16, 00:20:18.940 "task_count": 2048, 00:20:18.940 "sequence_count": 2048, 00:20:18.940 "buf_count": 2048 00:20:18.940 } 00:20:18.940 } 00:20:18.940 ] 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "subsystem": "bdev", 00:20:18.940 "config": [ 00:20:18.940 { 00:20:18.940 "method": "bdev_set_options", 00:20:18.940 "params": { 00:20:18.940 "bdev_io_pool_size": 65535, 00:20:18.940 "bdev_io_cache_size": 256, 00:20:18.940 "bdev_auto_examine": true, 00:20:18.940 "iobuf_small_cache_size": 128, 00:20:18.940 "iobuf_large_cache_size": 16 00:20:18.940 } 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "method": "bdev_raid_set_options", 00:20:18.940 "params": { 00:20:18.940 "process_window_size_kb": 1024, 00:20:18.940 "process_max_bandwidth_mb_sec": 0 00:20:18.940 } 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "method": "bdev_iscsi_set_options", 00:20:18.940 "params": { 00:20:18.940 "timeout_sec": 30 00:20:18.940 } 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "method": "bdev_nvme_set_options", 00:20:18.940 "params": { 00:20:18.940 "action_on_timeout": "none", 00:20:18.940 "timeout_us": 0, 00:20:18.940 "timeout_admin_us": 0, 00:20:18.940 "keep_alive_timeout_ms": 10000, 00:20:18.940 "arbitration_burst": 0, 00:20:18.940 "low_priority_weight": 0, 00:20:18.940 "medium_priority_weight": 0, 00:20:18.940 "high_priority_weight": 0, 00:20:18.940 "nvme_adminq_poll_period_us": 
10000, 00:20:18.940 "nvme_ioq_poll_period_us": 0, 00:20:18.940 "io_queue_requests": 512, 00:20:18.940 "delay_cmd_submit": true, 00:20:18.940 "transport_retry_count": 4, 00:20:18.940 "bdev_retry_count": 3, 00:20:18.940 "transport_ack_timeout": 0, 00:20:18.940 "ctrlr_loss_timeout_sec": 0, 00:20:18.940 "reconnect_delay_sec": 0, 00:20:18.940 "fast_io_fail_timeout_sec": 0, 00:20:18.940 "disable_auto_failback": false, 00:20:18.940 "generate_uuids": false, 00:20:18.940 "transport_tos": 0, 00:20:18.940 "nvme_error_stat": false, 00:20:18.940 "rdma_srq_size": 0, 00:20:18.940 "io_path_stat": false, 00:20:18.940 "allow_accel_sequence": false, 00:20:18.940 "rdma_max_cq_size": 0, 00:20:18.940 "rdma_cm_event_timeout_ms": 0, 00:20:18.940 "dhchap_digests": [ 00:20:18.940 "sha256", 00:20:18.940 "sha384", 00:20:18.940 "sha512" 00:20:18.940 ], 00:20:18.940 "dhchap_dhgroups": [ 00:20:18.940 "null", 00:20:18.940 "ffdhe2048", 00:20:18.940 "ffdhe3072", 00:20:18.940 "ffdhe4096", 00:20:18.940 "ffdhe6144", 00:20:18.940 "ffdhe8192" 00:20:18.940 ] 00:20:18.940 } 00:20:18.940 }, 00:20:18.940 { 00:20:18.940 "method": "bdev_nvme_attach_controller", 00:20:18.940 "params": { 00:20:18.940 "name": "nvme0", 00:20:18.940 "trtype": "TCP", 00:20:18.940 "adrfam": "IPv4", 00:20:18.940 "traddr": "127.0.0.1", 00:20:18.940 "trsvcid": "4420", 00:20:18.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:18.940 "prchk_reftag": false, 00:20:18.940 "prchk_guard": false, 00:20:18.940 "ctrlr_loss_timeout_sec": 0, 00:20:18.940 "reconnect_delay_sec": 0, 00:20:18.940 "fast_io_fail_timeout_sec": 0, 00:20:18.940 "psk": "key0", 00:20:18.941 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:18.941 "hdgst": false, 00:20:18.941 "ddgst": false, 00:20:18.941 "multipath": "multipath" 00:20:18.941 } 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "method": "bdev_nvme_set_hotplug", 00:20:18.941 "params": { 00:20:18.941 "period_us": 100000, 00:20:18.941 "enable": false 00:20:18.941 } 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "method": "bdev_wait_for_examine" 00:20:18.941 } 00:20:18.941 ] 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "subsystem": "nbd", 00:20:18.941 "config": [] 00:20:18.941 } 00:20:18.941 ] 00:20:18.941 }' 00:20:18.941 12:28:05 keyring_file -- keyring/file.sh@115 -- # killprocess 84543 00:20:18.941 12:28:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84543 ']' 00:20:18.941 12:28:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84543 00:20:18.941 12:28:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:20:18.941 12:28:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.941 12:28:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84543 00:20:18.941 killing process with pid 84543 00:20:18.941 Received shutdown signal, test time was about 1.000000 seconds 00:20:18.941 00:20:18.941 Latency(us) 00:20:18.941 [2024-12-06T12:28:05.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.941 [2024-12-06T12:28:05.599Z] =================================================================================================================== 00:20:18.941 [2024-12-06T12:28:05.599Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.941 12:28:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:18.941 12:28:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:18.941 12:28:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84543' 00:20:18.941 
12:28:05 keyring_file -- common/autotest_common.sh@973 -- # kill 84543 00:20:18.941 12:28:05 keyring_file -- common/autotest_common.sh@978 -- # wait 84543 00:20:18.941 12:28:05 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:20:18.941 12:28:05 keyring_file -- keyring/file.sh@118 -- # bperfpid=84781 00:20:18.941 12:28:05 keyring_file -- keyring/file.sh@120 -- # waitforlisten 84781 /var/tmp/bperf.sock 00:20:18.941 12:28:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84781 ']' 00:20:18.941 12:28:05 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:20:18.941 "subsystems": [ 00:20:18.941 { 00:20:18.941 "subsystem": "keyring", 00:20:18.941 "config": [ 00:20:18.941 { 00:20:18.941 "method": "keyring_file_add_key", 00:20:18.941 "params": { 00:20:18.941 "name": "key0", 00:20:18.941 "path": "/tmp/tmp.DFH7kccIfk" 00:20:18.941 } 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "method": "keyring_file_add_key", 00:20:18.941 "params": { 00:20:18.941 "name": "key1", 00:20:18.941 "path": "/tmp/tmp.XMvHMoo3Wd" 00:20:18.941 } 00:20:18.941 } 00:20:18.941 ] 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "subsystem": "iobuf", 00:20:18.941 "config": [ 00:20:18.941 { 00:20:18.941 "method": "iobuf_set_options", 00:20:18.941 "params": { 00:20:18.941 "small_pool_count": 8192, 00:20:18.941 "large_pool_count": 1024, 00:20:18.941 "small_bufsize": 8192, 00:20:18.941 "large_bufsize": 135168, 00:20:18.941 "enable_numa": false 00:20:18.941 } 00:20:18.941 } 00:20:18.941 ] 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "subsystem": "sock", 00:20:18.941 "config": [ 00:20:18.941 { 00:20:18.941 "method": "sock_set_default_impl", 00:20:18.941 "params": { 00:20:18.941 "impl_name": "uring" 00:20:18.941 } 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "method": "sock_impl_set_options", 00:20:18.941 "params": { 00:20:18.941 "impl_name": "ssl", 00:20:18.941 "recv_buf_size": 4096, 00:20:18.941 "send_buf_size": 4096, 00:20:18.941 "enable_recv_pipe": true, 00:20:18.941 "enable_quickack": false, 00:20:18.941 "enable_placement_id": 0, 00:20:18.941 "enable_zerocopy_send_server": true, 00:20:18.941 "enable_zerocopy_send_client": false, 00:20:18.941 "zerocopy_threshold": 0, 00:20:18.941 "tls_version": 0, 00:20:18.941 "enable_ktls": false 00:20:18.941 } 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "method": "sock_impl_set_options", 00:20:18.941 "params": { 00:20:18.941 "impl_name": "posix", 00:20:18.941 "recv_buf_size": 2097152, 00:20:18.941 "send_buf_size": 2097152, 00:20:18.941 "enable_recv_pipe": true, 00:20:18.941 "enable_quickack": false, 00:20:18.941 "enable_placement_id": 0, 00:20:18.941 "enable_zerocopy_send_server": true, 00:20:18.941 "enable_zerocopy_send_client": false, 00:20:18.941 "zerocopy_threshold": 0, 00:20:18.941 "tls_version": 0, 00:20:18.941 "enable_ktls": false 00:20:18.941 } 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "method": "sock_impl_set_options", 00:20:18.941 "params": { 00:20:18.941 "impl_name": "uring", 00:20:18.941 "recv_buf_size": 2097152, 00:20:18.941 "send_buf_size": 2097152, 00:20:18.941 "enable_recv_pipe": true, 00:20:18.941 "enable_quickack": false, 00:20:18.941 "enable_placement_id": 0, 00:20:18.941 "enable_zerocopy_send_server": false, 00:20:18.941 "enable_zerocopy_send_client": false, 00:20:18.941 "zerocopy_threshold": 0, 00:20:18.941 "tls_version": 0, 00:20:18.941 "enable_ktls": false 00:20:18.941 } 00:20:18.941 } 00:20:18.941 ] 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 
"subsystem": "vmd", 00:20:18.941 "config": [] 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "subsystem": "accel", 00:20:18.941 "config": [ 00:20:18.941 { 00:20:18.941 "method": "accel_set_options", 00:20:18.941 "params": { 00:20:18.941 "small_cache_size": 128, 00:20:18.941 "large_cache_size": 16, 00:20:18.941 "task_count": 2048, 00:20:18.941 "sequence_count": 2048, 00:20:18.941 "buf_count": 2048 00:20:18.941 } 00:20:18.941 } 00:20:18.941 ] 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "subsystem": "bdev", 00:20:18.941 "config": [ 00:20:18.941 { 00:20:18.941 "method": "bdev_set_options", 00:20:18.941 "params": { 00:20:18.941 "bdev_io_pool_size": 65535, 00:20:18.941 "bdev_io_cache_size": 256, 00:20:18.941 "bdev_auto_examine": true, 00:20:18.941 "iobuf_small_cache_size": 128, 00:20:18.941 "iobuf_large_cache_size": 16 00:20:18.941 } 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "method": "bdev_raid_set_options", 00:20:18.941 "params": { 00:20:18.941 "process_window_size_kb": 1024, 00:20:18.941 "process_max_bandwidth_mb_sec": 0 00:20:18.941 } 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "method": "bdev_iscsi_set_options", 00:20:18.941 "params": { 00:20:18.941 "timeout_sec": 30 00:20:18.941 } 00:20:18.941 }, 00:20:18.941 { 00:20:18.941 "method": "bdev_nvme_set_options", 00:20:18.941 "params": { 00:20:18.941 "action_on_timeout": "none", 00:20:18.941 "timeout_us": 0, 00:20:18.941 "timeout_admin_us": 0, 00:20:18.941 "keep_alive_timeout_ms": 10000, 00:20:18.941 "arbitration_burst": 0, 00:20:18.941 "low_priority_weight": 0, 00:20:18.941 "medium_priority_weight": 0, 00:20:18.941 "high_priority_weight": 0, 00:20:18.941 "nvme_adminq_poll_period_us": 10000, 00:20:19.200 "nvme_io 12:28:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:19.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:20:19.200 12:28:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.200 q_poll_period_us": 0, 00:20:19.200 "io_queue_requests": 512, 00:20:19.200 "delay_cmd_submit": true, 00:20:19.200 "transport_retry_count": 4, 00:20:19.200 "bdev_retry_count": 3, 00:20:19.200 "transport_ack_timeout": 0, 00:20:19.200 "ctrlr_loss_timeout_sec": 0, 00:20:19.200 "reconnect_delay_sec": 0, 00:20:19.200 "fast_io_fail_timeout_sec": 0, 00:20:19.200 "disable_auto_failback": false, 00:20:19.200 "generate_uuids": false, 00:20:19.200 "transport_tos": 0, 00:20:19.200 "nvme_error_stat": false, 00:20:19.200 "rdma_srq_size": 0, 00:20:19.200 "io_path_stat": false, 00:20:19.200 "allow_accel_sequence": false, 00:20:19.200 "rdma_max_cq_size": 0, 00:20:19.200 "rdma_cm_event_timeout_ms": 0, 00:20:19.200 "dhchap_digests": [ 00:20:19.200 "sha256", 00:20:19.200 "sha384", 00:20:19.200 "sha512" 00:20:19.200 ], 00:20:19.200 "dhchap_dhgroups": [ 00:20:19.200 "null", 00:20:19.200 "ffdhe2048", 00:20:19.200 "ffdhe3072", 00:20:19.200 "ffdhe4096", 00:20:19.200 "ffdhe6144", 00:20:19.200 "ffdhe8192" 00:20:19.200 ] 00:20:19.200 } 00:20:19.200 }, 00:20:19.200 { 00:20:19.200 "method": "bdev_nvme_attach_controller", 00:20:19.200 "params": { 00:20:19.200 "name": "nvme0", 00:20:19.200 "trtype": "TCP", 00:20:19.200 "adrfam": "IPv4", 00:20:19.200 "traddr": "127.0.0.1", 00:20:19.200 "trsvcid": "4420", 00:20:19.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.200 "prchk_reftag": false, 00:20:19.200 "prchk_guard": false, 00:20:19.200 "ctrlr_loss_timeout_sec": 0, 00:20:19.200 "reconnect_delay_sec": 0, 00:20:19.200 "fast_io_fail_timeout_sec": 0, 00:20:19.200 "psk": "key0", 00:20:19.200 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:19.200 "hdgst": false, 00:20:19.200 "ddgst": false, 00:20:19.200 "multipath": "multipath" 00:20:19.200 } 00:20:19.200 }, 00:20:19.200 { 00:20:19.200 "method": "bdev_nvme_set_hotplug", 00:20:19.200 "params": { 00:20:19.200 "period_us": 100000, 00:20:19.200 "enable": false 00:20:19.200 } 00:20:19.200 }, 00:20:19.200 { 00:20:19.200 "method": "bdev_wait_for_examine" 00:20:19.200 } 00:20:19.200 ] 00:20:19.200 }, 00:20:19.200 { 00:20:19.200 "subsystem": "nbd", 00:20:19.201 "config": [] 00:20:19.201 } 00:20:19.201 ] 00:20:19.201 }' 00:20:19.201 12:28:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:19.201 12:28:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.201 12:28:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:19.201 [2024-12-06 12:28:05.639376] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:20:19.201 [2024-12-06 12:28:05.639462] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84781 ] 00:20:19.201 [2024-12-06 12:28:05.774087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.201 [2024-12-06 12:28:05.805501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.460 [2024-12-06 12:28:05.913696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:19.460 [2024-12-06 12:28:05.952954] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.027 12:28:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.027 12:28:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:20.027 12:28:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:20:20.027 12:28:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:20:20.027 12:28:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:20.286 12:28:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:20:20.286 12:28:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:20:20.286 12:28:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:20.286 12:28:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:20.286 12:28:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:20.286 12:28:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:20.286 12:28:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:20.545 12:28:07 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:20:20.545 12:28:07 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:20:20.545 12:28:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:20.545 12:28:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:20.545 12:28:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:20.545 12:28:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:20.545 12:28:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:20.803 12:28:07 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:20:20.803 12:28:07 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:20:20.803 12:28:07 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:20:20.803 12:28:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:20:21.062 12:28:07 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:20:21.062 12:28:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:20:21.062 12:28:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.DFH7kccIfk /tmp/tmp.XMvHMoo3Wd 00:20:21.062 12:28:07 keyring_file -- keyring/file.sh@20 -- # killprocess 84781 00:20:21.062 12:28:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84781 ']' 00:20:21.062 12:28:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84781 00:20:21.062 12:28:07 keyring_file -- 
common/autotest_common.sh@959 -- # uname 00:20:21.062 12:28:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.062 12:28:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84781 00:20:21.062 killing process with pid 84781 00:20:21.062 Received shutdown signal, test time was about 1.000000 seconds 00:20:21.062 00:20:21.062 Latency(us) 00:20:21.062 [2024-12-06T12:28:07.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.062 [2024-12-06T12:28:07.720Z] =================================================================================================================== 00:20:21.062 [2024-12-06T12:28:07.720Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.062 12:28:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:21.062 12:28:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:21.062 12:28:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84781' 00:20:21.062 12:28:07 keyring_file -- common/autotest_common.sh@973 -- # kill 84781 00:20:21.062 12:28:07 keyring_file -- common/autotest_common.sh@978 -- # wait 84781 00:20:21.321 12:28:07 keyring_file -- keyring/file.sh@21 -- # killprocess 84533 00:20:21.321 12:28:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84533 ']' 00:20:21.321 12:28:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84533 00:20:21.321 12:28:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:20:21.321 12:28:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.321 12:28:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84533 00:20:21.321 killing process with pid 84533 00:20:21.321 12:28:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.321 12:28:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.321 12:28:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84533' 00:20:21.321 12:28:07 keyring_file -- common/autotest_common.sh@973 -- # kill 84533 00:20:21.321 12:28:07 keyring_file -- common/autotest_common.sh@978 -- # wait 84533 00:20:21.579 ************************************ 00:20:21.579 END TEST keyring_file 00:20:21.579 ************************************ 00:20:21.579 00:20:21.579 real 0m14.320s 00:20:21.579 user 0m37.012s 00:20:21.579 sys 0m2.663s 00:20:21.579 12:28:08 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.579 12:28:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:21.579 12:28:08 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:20:21.579 12:28:08 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:21.579 12:28:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.579 12:28:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.579 12:28:08 -- common/autotest_common.sh@10 -- # set +x 00:20:21.579 ************************************ 00:20:21.579 START TEST keyring_linux 00:20:21.579 ************************************ 00:20:21.579 12:28:08 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:21.579 Joined session keyring: 583270097 00:20:21.579 * Looking 
for test storage... 00:20:21.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:21.579 12:28:08 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:21.579 12:28:08 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:20:21.579 12:28:08 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:21.579 12:28:08 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:21.579 12:28:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:20:21.580 12:28:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:20:21.838 12:28:08 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.838 12:28:08 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:21.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.838 --rc genhtml_branch_coverage=1 00:20:21.838 --rc genhtml_function_coverage=1 00:20:21.838 --rc genhtml_legend=1 00:20:21.838 --rc geninfo_all_blocks=1 00:20:21.838 --rc geninfo_unexecuted_blocks=1 00:20:21.838 00:20:21.838 ' 00:20:21.838 12:28:08 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:21.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.838 --rc genhtml_branch_coverage=1 00:20:21.838 --rc genhtml_function_coverage=1 00:20:21.838 --rc genhtml_legend=1 00:20:21.838 
--rc geninfo_all_blocks=1 00:20:21.838 --rc geninfo_unexecuted_blocks=1 00:20:21.838 00:20:21.838 ' 00:20:21.838 12:28:08 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:21.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.838 --rc genhtml_branch_coverage=1 00:20:21.838 --rc genhtml_function_coverage=1 00:20:21.838 --rc genhtml_legend=1 00:20:21.838 --rc geninfo_all_blocks=1 00:20:21.838 --rc geninfo_unexecuted_blocks=1 00:20:21.838 00:20:21.838 ' 00:20:21.838 12:28:08 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:21.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.838 --rc genhtml_branch_coverage=1 00:20:21.838 --rc genhtml_function_coverage=1 00:20:21.838 --rc genhtml_legend=1 00:20:21.838 --rc geninfo_all_blocks=1 00:20:21.838 --rc geninfo_unexecuted_blocks=1 00:20:21.838 00:20:21.838 ' 00:20:21.838 12:28:08 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:21.838 12:28:08 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:539e2455-b2a8-46ce-bfce-40a317783b05 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=539e2455-b2a8-46ce-bfce-40a317783b05 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.838 12:28:08 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.838 12:28:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.839 12:28:08 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.839 12:28:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.839 12:28:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.839 12:28:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:20:21.839 12:28:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.839 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:21.839 12:28:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:21.839 12:28:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:21.839 12:28:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:20:21.839 12:28:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:20:21.839 12:28:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:20:21.839 12:28:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@15 -- # local 
name key digest path 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:20:21.839 /tmp/:spdk-test:key0 00:20:21.839 12:28:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:20:21.839 12:28:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:20:21.839 /tmp/:spdk-test:key1 00:20:21.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.839 12:28:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:20:21.839 12:28:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84908 00:20:21.839 12:28:08 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:21.839 12:28:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84908 00:20:21.839 12:28:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 84908 ']' 00:20:21.839 12:28:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.839 12:28:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.839 12:28:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:21.839 12:28:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.839 12:28:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:21.839 [2024-12-06 12:28:08.429485] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:20:21.839 [2024-12-06 12:28:08.429763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84908 ] 00:20:22.098 [2024-12-06 12:28:08.566240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.098 [2024-12-06 12:28:08.593269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.098 [2024-12-06 12:28:08.629165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:23.031 12:28:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.031 12:28:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:20:23.031 12:28:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:20:23.031 12:28:09 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.031 12:28:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:23.031 [2024-12-06 12:28:09.375530] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.031 null0 00:20:23.031 [2024-12-06 12:28:09.407488] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.031 [2024-12-06 12:28:09.407837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:23.031 12:28:09 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.031 12:28:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:20:23.031 617162693 00:20:23.031 12:28:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:20:23.031 581018080 00:20:23.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:23.031 12:28:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84926 00:20:23.031 12:28:09 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:20:23.031 12:28:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84926 /var/tmp/bperf.sock 00:20:23.031 12:28:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 84926 ']' 00:20:23.031 12:28:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:23.031 12:28:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.031 12:28:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:23.031 12:28:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.031 12:28:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:23.031 [2024-12-06 12:28:09.488817] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:20:23.031 [2024-12-06 12:28:09.489543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84926 ] 00:20:23.031 [2024-12-06 12:28:09.642393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.031 [2024-12-06 12:28:09.681133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.967 12:28:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.967 12:28:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:20:23.967 12:28:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:20:23.967 12:28:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:20:24.226 12:28:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:20:24.226 12:28:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:24.485 [2024-12-06 12:28:10.951252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:24.485 12:28:10 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:24.485 12:28:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:24.744 [2024-12-06 12:28:11.192940] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.744 nvme0n1 00:20:24.744 12:28:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:20:24.744 12:28:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:20:24.744 12:28:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:24.744 12:28:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:24.744 12:28:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:24.744 12:28:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:25.003 12:28:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:20:25.003 12:28:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:25.003 12:28:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:20:25.003 12:28:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:20:25.003 12:28:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:25.003 12:28:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:25.003 12:28:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:20:25.262 12:28:11 keyring_linux -- keyring/linux.sh@25 -- # sn=617162693 00:20:25.262 12:28:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:20:25.262 12:28:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:20:25.262 12:28:11 keyring_linux -- keyring/linux.sh@26 -- # [[ 617162693 == \6\1\7\1\6\2\6\9\3 ]] 00:20:25.262 12:28:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 617162693 00:20:25.263 12:28:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:20:25.263 12:28:11 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:25.263 Running I/O for 1 seconds... 00:20:26.199 15597.00 IOPS, 60.93 MiB/s 00:20:26.199 Latency(us) 00:20:26.199 [2024-12-06T12:28:12.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.199 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:26.199 nvme0n1 : 1.01 15606.53 60.96 0.00 0.00 8165.74 5689.72 15013.70 00:20:26.199 [2024-12-06T12:28:12.857Z] =================================================================================================================== 00:20:26.199 [2024-12-06T12:28:12.857Z] Total : 15606.53 60.96 0.00 0.00 8165.74 5689.72 15013.70 00:20:26.199 { 00:20:26.199 "results": [ 00:20:26.199 { 00:20:26.199 "job": "nvme0n1", 00:20:26.200 "core_mask": "0x2", 00:20:26.200 "workload": "randread", 00:20:26.200 "status": "finished", 00:20:26.200 "queue_depth": 128, 00:20:26.200 "io_size": 4096, 00:20:26.200 "runtime": 1.007655, 00:20:26.200 "iops": 15606.531997558688, 00:20:26.200 "mibps": 60.963015615463625, 00:20:26.200 "io_failed": 0, 00:20:26.200 "io_timeout": 0, 00:20:26.200 "avg_latency_us": 8165.73706912698, 00:20:26.200 "min_latency_us": 5689.716363636364, 00:20:26.200 "max_latency_us": 15013.701818181818 00:20:26.200 } 00:20:26.200 ], 00:20:26.200 "core_count": 1 00:20:26.200 } 00:20:26.458 12:28:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:26.458 12:28:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:26.717 12:28:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:20:26.717 12:28:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:20:26.718 12:28:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:26.718 12:28:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:26.718 12:28:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:26.718 12:28:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:26.977 
12:28:13 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:26.977 12:28:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:26.977 [2024-12-06 12:28:13.590753] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:26.977 [2024-12-06 12:28:13.590757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb1d0 (107): Transport endpoint is not connected 00:20:26.977 [2024-12-06 12:28:13.591749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb1d0 (9): Bad file descriptor 00:20:26.977 [2024-12-06 12:28:13.592745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:20:26.977 [2024-12-06 12:28:13.592770] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:26.977 [2024-12-06 12:28:13.592797] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:20:26.977 [2024-12-06 12:28:13.592807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:20:26.977 request: 00:20:26.977 { 00:20:26.977 "name": "nvme0", 00:20:26.977 "trtype": "tcp", 00:20:26.977 "traddr": "127.0.0.1", 00:20:26.977 "adrfam": "ipv4", 00:20:26.977 "trsvcid": "4420", 00:20:26.977 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:26.977 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:26.977 "prchk_reftag": false, 00:20:26.977 "prchk_guard": false, 00:20:26.977 "hdgst": false, 00:20:26.977 "ddgst": false, 00:20:26.977 "psk": ":spdk-test:key1", 00:20:26.977 "allow_unrecognized_csi": false, 00:20:26.977 "method": "bdev_nvme_attach_controller", 00:20:26.977 "req_id": 1 00:20:26.977 } 00:20:26.977 Got JSON-RPC error response 00:20:26.977 response: 00:20:26.977 { 00:20:26.977 "code": -5, 00:20:26.977 "message": "Input/output error" 00:20:26.977 } 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@33 -- # sn=617162693 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 617162693 00:20:26.977 1 links removed 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@33 -- # sn=581018080 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 581018080 00:20:26.977 1 links removed 00:20:26.977 12:28:13 keyring_linux -- keyring/linux.sh@41 -- # killprocess 84926 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 84926 ']' 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 84926 00:20:26.977 12:28:13 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84926 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:27.237 killing process with pid 84926 00:20:27.237 Received shutdown signal, test time was about 1.000000 seconds 00:20:27.237 00:20:27.237 Latency(us) 00:20:27.237 [2024-12-06T12:28:13.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.237 [2024-12-06T12:28:13.895Z] =================================================================================================================== 00:20:27.237 
[2024-12-06T12:28:13.895Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84926' 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 84926 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 84926 00:20:27.237 12:28:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84908 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 84908 ']' 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 84908 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84908 00:20:27.237 killing process with pid 84908 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84908' 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 84908 00:20:27.237 12:28:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 84908 00:20:27.496 ************************************ 00:20:27.496 END TEST keyring_linux 00:20:27.496 ************************************ 00:20:27.496 00:20:27.496 real 0m5.963s 00:20:27.496 user 0m11.707s 00:20:27.496 sys 0m1.362s 00:20:27.496 12:28:14 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.496 12:28:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:27.496 12:28:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:27.496 12:28:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:27.496 12:28:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:27.496 12:28:14 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:27.496 12:28:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:27.496 12:28:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:27.496 12:28:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:27.496 12:28:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:27.496 12:28:14 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:27.496 12:28:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:27.496 12:28:14 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:27.496 12:28:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:27.496 12:28:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:27.496 12:28:14 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:27.496 12:28:14 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:27.496 12:28:14 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:20:27.496 12:28:14 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:27.496 12:28:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:27.496 12:28:14 -- common/autotest_common.sh@10 -- # set +x 00:20:27.496 12:28:14 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:27.496 12:28:14 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:27.496 12:28:14 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:27.496 12:28:14 -- common/autotest_common.sh@10 -- # set +x 00:20:29.400 INFO: APP EXITING 00:20:29.400 INFO: 
killing all VMs 00:20:29.400 INFO: killing vhost app 00:20:29.400 INFO: EXIT DONE 00:20:30.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:30.338 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:30.338 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:30.908 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:30.908 Cleaning 00:20:30.908 Removing: /var/run/dpdk/spdk0/config 00:20:30.908 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:30.908 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:30.908 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:30.908 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:30.908 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:30.908 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:30.908 Removing: /var/run/dpdk/spdk1/config 00:20:30.908 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:30.908 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:30.908 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:20:30.908 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:30.908 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:30.908 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:30.908 Removing: /var/run/dpdk/spdk2/config 00:20:30.908 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:30.908 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:30.908 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:30.908 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:30.908 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:30.908 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:30.908 Removing: /var/run/dpdk/spdk3/config 00:20:30.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:30.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:30.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:30.908 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:30.908 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:30.908 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:30.908 Removing: /var/run/dpdk/spdk4/config 00:20:30.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:30.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:30.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:30.908 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:30.908 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:30.908 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:31.181 Removing: /dev/shm/nvmf_trace.0 00:20:31.181 Removing: /dev/shm/spdk_tgt_trace.pid56655 00:20:31.181 Removing: /var/run/dpdk/spdk0 00:20:31.181 Removing: /var/run/dpdk/spdk1 00:20:31.181 Removing: /var/run/dpdk/spdk2 00:20:31.181 Removing: /var/run/dpdk/spdk3 00:20:31.181 Removing: /var/run/dpdk/spdk4 00:20:31.181 Removing: /var/run/dpdk/spdk_pid56508 00:20:31.181 Removing: /var/run/dpdk/spdk_pid56655 00:20:31.181 Removing: /var/run/dpdk/spdk_pid56848 00:20:31.181 Removing: /var/run/dpdk/spdk_pid56935 00:20:31.181 Removing: /var/run/dpdk/spdk_pid56949 00:20:31.181 Removing: /var/run/dpdk/spdk_pid57059 00:20:31.181 Removing: /var/run/dpdk/spdk_pid57069 00:20:31.181 Removing: /var/run/dpdk/spdk_pid57203 00:20:31.181 Removing: /var/run/dpdk/spdk_pid57399 00:20:31.181 Removing: /var/run/dpdk/spdk_pid57547 00:20:31.181 Removing: 
/var/run/dpdk/spdk_pid57620 00:20:31.181 Removing: /var/run/dpdk/spdk_pid57704 00:20:31.181 Removing: /var/run/dpdk/spdk_pid57790 00:20:31.181 Removing: /var/run/dpdk/spdk_pid57875 00:20:31.181 Removing: /var/run/dpdk/spdk_pid57908 00:20:31.181 Removing: /var/run/dpdk/spdk_pid57938 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58007 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58087 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58529 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58568 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58614 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58630 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58691 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58694 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58761 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58764 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58815 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58820 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58860 00:20:31.181 Removing: /var/run/dpdk/spdk_pid58878 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59001 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59031 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59118 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59440 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59458 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59489 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59502 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59518 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59537 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59550 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59566 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59584 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59593 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59608 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59627 00:20:31.181 Removing: /var/run/dpdk/spdk_pid59641 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59656 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59670 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59683 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59699 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59718 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59731 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59747 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59777 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59791 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59820 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59887 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59915 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59925 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59948 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59964 00:20:31.182 Removing: /var/run/dpdk/spdk_pid59966 00:20:31.182 Removing: /var/run/dpdk/spdk_pid60008 00:20:31.182 Removing: /var/run/dpdk/spdk_pid60022 00:20:31.182 Removing: /var/run/dpdk/spdk_pid60045 00:20:31.182 Removing: /var/run/dpdk/spdk_pid60060 00:20:31.182 Removing: /var/run/dpdk/spdk_pid60064 00:20:31.182 Removing: /var/run/dpdk/spdk_pid60073 00:20:31.182 Removing: /var/run/dpdk/spdk_pid60083 00:20:31.182 Removing: /var/run/dpdk/spdk_pid60087 00:20:31.182 Removing: /var/run/dpdk/spdk_pid60096 00:20:31.182 Removing: /var/run/dpdk/spdk_pid60106 00:20:31.182 Removing: /var/run/dpdk/spdk_pid60129 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60161 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60165 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60199 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60203 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60205 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60251 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60257 
00:20:31.478 Removing: /var/run/dpdk/spdk_pid60290 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60292 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60298 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60307 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60309 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60322 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60324 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60327 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60408 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60450 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60557 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60585 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60632 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60652 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60663 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60683 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60715 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60730 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60808 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60824 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60857 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60922 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60967 00:20:31.478 Removing: /var/run/dpdk/spdk_pid60991 00:20:31.478 Removing: /var/run/dpdk/spdk_pid61097 00:20:31.478 Removing: /var/run/dpdk/spdk_pid61140 00:20:31.478 Removing: /var/run/dpdk/spdk_pid61172 00:20:31.478 Removing: /var/run/dpdk/spdk_pid61399 00:20:31.478 Removing: /var/run/dpdk/spdk_pid61491 00:20:31.478 Removing: /var/run/dpdk/spdk_pid61519 00:20:31.478 Removing: /var/run/dpdk/spdk_pid61549 00:20:31.478 Removing: /var/run/dpdk/spdk_pid61581 00:20:31.478 Removing: /var/run/dpdk/spdk_pid61616 00:20:31.478 Removing: /var/run/dpdk/spdk_pid61644 00:20:31.478 Removing: /var/run/dpdk/spdk_pid61682 00:20:31.478 Removing: /var/run/dpdk/spdk_pid62065 00:20:31.478 Removing: /var/run/dpdk/spdk_pid62106 00:20:31.478 Removing: /var/run/dpdk/spdk_pid62446 00:20:31.478 Removing: /var/run/dpdk/spdk_pid62902 00:20:31.478 Removing: /var/run/dpdk/spdk_pid63173 00:20:31.478 Removing: /var/run/dpdk/spdk_pid63995 00:20:31.478 Removing: /var/run/dpdk/spdk_pid64907 00:20:31.478 Removing: /var/run/dpdk/spdk_pid65030 00:20:31.478 Removing: /var/run/dpdk/spdk_pid65093 00:20:31.478 Removing: /var/run/dpdk/spdk_pid66521 00:20:31.478 Removing: /var/run/dpdk/spdk_pid66827 00:20:31.478 Removing: /var/run/dpdk/spdk_pid70432 00:20:31.478 Removing: /var/run/dpdk/spdk_pid70773 00:20:31.478 Removing: /var/run/dpdk/spdk_pid70888 00:20:31.478 Removing: /var/run/dpdk/spdk_pid71017 00:20:31.478 Removing: /var/run/dpdk/spdk_pid71037 00:20:31.478 Removing: /var/run/dpdk/spdk_pid71054 00:20:31.478 Removing: /var/run/dpdk/spdk_pid71075 00:20:31.478 Removing: /var/run/dpdk/spdk_pid71169 00:20:31.478 Removing: /var/run/dpdk/spdk_pid71310 00:20:31.478 Removing: /var/run/dpdk/spdk_pid71461 00:20:31.478 Removing: /var/run/dpdk/spdk_pid71537 00:20:31.478 Removing: /var/run/dpdk/spdk_pid71719 00:20:31.478 Removing: /var/run/dpdk/spdk_pid71802 00:20:31.478 Removing: /var/run/dpdk/spdk_pid71882 00:20:31.478 Removing: /var/run/dpdk/spdk_pid72234 00:20:31.478 Removing: /var/run/dpdk/spdk_pid72636 00:20:31.478 Removing: /var/run/dpdk/spdk_pid72637 00:20:31.478 Removing: /var/run/dpdk/spdk_pid72638 00:20:31.478 Removing: /var/run/dpdk/spdk_pid72905 00:20:31.478 Removing: /var/run/dpdk/spdk_pid73167 00:20:31.478 Removing: /var/run/dpdk/spdk_pid73543 00:20:31.478 Removing: /var/run/dpdk/spdk_pid73549 00:20:31.478 Removing: /var/run/dpdk/spdk_pid73869 00:20:31.478 Removing: 
/var/run/dpdk/spdk_pid73887 00:20:31.479 Removing: /var/run/dpdk/spdk_pid73901 00:20:31.479 Removing: /var/run/dpdk/spdk_pid73932 00:20:31.479 Removing: /var/run/dpdk/spdk_pid73937 00:20:31.479 Removing: /var/run/dpdk/spdk_pid74282 00:20:31.479 Removing: /var/run/dpdk/spdk_pid74331 00:20:31.479 Removing: /var/run/dpdk/spdk_pid74658 00:20:31.479 Removing: /var/run/dpdk/spdk_pid74849 00:20:31.479 Removing: /var/run/dpdk/spdk_pid75274 00:20:31.479 Removing: /var/run/dpdk/spdk_pid75818 00:20:31.479 Removing: /var/run/dpdk/spdk_pid76696 00:20:31.479 Removing: /var/run/dpdk/spdk_pid77327 00:20:31.479 Removing: /var/run/dpdk/spdk_pid77330 00:20:31.479 Removing: /var/run/dpdk/spdk_pid79334 00:20:31.479 Removing: /var/run/dpdk/spdk_pid79387 00:20:31.479 Removing: /var/run/dpdk/spdk_pid79434 00:20:31.479 Removing: /var/run/dpdk/spdk_pid79486 00:20:31.479 Removing: /var/run/dpdk/spdk_pid79603 00:20:31.750 Removing: /var/run/dpdk/spdk_pid79646 00:20:31.750 Removing: /var/run/dpdk/spdk_pid79699 00:20:31.750 Removing: /var/run/dpdk/spdk_pid79759 00:20:31.750 Removing: /var/run/dpdk/spdk_pid80111 00:20:31.750 Removing: /var/run/dpdk/spdk_pid81329 00:20:31.750 Removing: /var/run/dpdk/spdk_pid81469 00:20:31.750 Removing: /var/run/dpdk/spdk_pid81703 00:20:31.750 Removing: /var/run/dpdk/spdk_pid82285 00:20:31.750 Removing: /var/run/dpdk/spdk_pid82449 00:20:31.750 Removing: /var/run/dpdk/spdk_pid82607 00:20:31.750 Removing: /var/run/dpdk/spdk_pid82706 00:20:31.750 Removing: /var/run/dpdk/spdk_pid82859 00:20:31.750 Removing: /var/run/dpdk/spdk_pid82968 00:20:31.750 Removing: /var/run/dpdk/spdk_pid83673 00:20:31.750 Removing: /var/run/dpdk/spdk_pid83707 00:20:31.750 Removing: /var/run/dpdk/spdk_pid83738 00:20:31.750 Removing: /var/run/dpdk/spdk_pid83993 00:20:31.750 Removing: /var/run/dpdk/spdk_pid84028 00:20:31.750 Removing: /var/run/dpdk/spdk_pid84063 00:20:31.750 Removing: /var/run/dpdk/spdk_pid84533 00:20:31.750 Removing: /var/run/dpdk/spdk_pid84543 00:20:31.750 Removing: /var/run/dpdk/spdk_pid84781 00:20:31.750 Removing: /var/run/dpdk/spdk_pid84908 00:20:31.750 Removing: /var/run/dpdk/spdk_pid84926 00:20:31.750 Clean 00:20:31.750 12:28:18 -- common/autotest_common.sh@1453 -- # return 0 00:20:31.750 12:28:18 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:31.750 12:28:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:31.750 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:20:31.750 12:28:18 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:31.750 12:28:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:31.750 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:20:31.750 12:28:18 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:31.750 12:28:18 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:31.750 12:28:18 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:31.750 12:28:18 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:31.750 12:28:18 -- spdk/autotest.sh@398 -- # hostname 00:20:31.750 12:28:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:32.010 geninfo: WARNING: invalid characters removed from 
testname! 00:20:53.948 12:28:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:57.242 12:28:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:59.147 12:28:45 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:01.681 12:28:47 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:04.214 12:28:50 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:06.751 12:28:52 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:08.735 12:28:55 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:08.735 12:28:55 -- spdk/autorun.sh@1 -- $ timing_finish 00:21:08.735 12:28:55 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:21:08.735 12:28:55 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:08.735 12:28:55 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:08.735 12:28:55 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:08.735 + [[ -n 5253 ]] 00:21:08.735 + sudo kill 5253 00:21:08.782 [Pipeline] } 00:21:08.796 [Pipeline] // timeout 00:21:08.801 [Pipeline] } 00:21:08.814 [Pipeline] // stage 00:21:08.818 [Pipeline] } 00:21:08.833 [Pipeline] // catchError 00:21:08.841 [Pipeline] stage 00:21:08.842 [Pipeline] { (Stop VM) 00:21:08.854 [Pipeline] sh 00:21:09.133 + vagrant halt 00:21:12.421 ==> default: Halting domain... 
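Before the VM teardown above, the coverage pass amounts to one capture plus a merge-and-filter sequence: the run's counters are captured from the build tree under the host's name, folded into the baseline, and then DPDK, system headers and the example/app sources are stripped from the combined report. Roughly, with the genhtml/geninfo rc flags elided and output paths shortened relative to the trace (cov_base.info is produced earlier in autotest):

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    OUT=/home/vagrant/spdk_repo/output

    # capture this run's counters, then add them to the baseline
    $LCOV -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o $OUT/cov_test.info
    $LCOV -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info

    # drop third-party, system and example/app code from the combined report
    $LCOV -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info
    $LCOV -r $OUT/cov_total.info --ignore-errors unused,unused '/usr/*' -o $OUT/cov_total.info
    $LCOV -r $OUT/cov_total.info '*/examples/vmd/*' -o $OUT/cov_total.info
    $LCOV -r $OUT/cov_total.info '*/app/spdk_lspci/*' -o $OUT/cov_total.info
    $LCOV -r $OUT/cov_total.info '*/app/spdk_top/*' -o $OUT/cov_total.info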
00:21:18.992 [Pipeline] sh 00:21:19.269 + vagrant destroy -f 00:21:21.795 ==> default: Removing domain... 00:21:22.065 [Pipeline] sh 00:21:22.345 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:21:22.354 [Pipeline] } 00:21:22.366 [Pipeline] // stage 00:21:22.370 [Pipeline] } 00:21:22.381 [Pipeline] // dir 00:21:22.386 [Pipeline] } 00:21:22.397 [Pipeline] // wrap 00:21:22.403 [Pipeline] } 00:21:22.414 [Pipeline] // catchError 00:21:22.423 [Pipeline] stage 00:21:22.425 [Pipeline] { (Epilogue) 00:21:22.436 [Pipeline] sh 00:21:22.716 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:28.019 [Pipeline] catchError 00:21:28.021 [Pipeline] { 00:21:28.037 [Pipeline] sh 00:21:28.318 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:28.575 Artifacts sizes are good 00:21:28.585 [Pipeline] } 00:21:28.602 [Pipeline] // catchError 00:21:28.643 [Pipeline] archiveArtifacts 00:21:28.661 Archiving artifacts 00:21:28.784 [Pipeline] cleanWs 00:21:28.794 [WS-CLEANUP] Deleting project workspace... 00:21:28.794 [WS-CLEANUP] Deferred wipeout is used... 00:21:28.799 [WS-CLEANUP] done 00:21:28.800 [Pipeline] } 00:21:28.812 [Pipeline] // stage 00:21:28.816 [Pipeline] } 00:21:28.828 [Pipeline] // node 00:21:28.832 [Pipeline] End of Pipeline 00:21:28.861 Finished: SUCCESS